Weekend Links #9: Stargate vs competition, Altman Lies, X buys X
Also how to protect yourself online
About the author: Peter Wildeford is a top forecaster, ranked top 1% every year since 2022. Here, he shares the news and analysis that informs his forecasts.
AI
OpenAI is now worth $300B?
OpenAI’s GPUs are melting from Studio Ghibli memes. But soon they’ll have much more money to buy more GPUs: Bloomberg reports that OpenAI is raising $40B at a $300B valuation. The round is led by SoftBank (which is also partnering with OpenAI on the “Stargate” data center buildout), and over 75% of the investment is coming from SoftBank.
If successful, as seems likely, this would be the largest private funding round of all time. It would also make OpenAI the second most valuable private company, after SpaceX (at $350B). If OpenAI went public at this valuation, that would put it in the top 34 companies, on par with Coca-Cola, T-Mobile, and Chevron.
~
The race to build data centers: Stargate is cool, but not a standout

What is OpenAI going to do with all that money? Build massive data centers at an accelerated pace and use those massive data centers to build and run better AI models.
Via a project called “Stargate”, OpenAI is planning to spend $100B/yr on building data centers, starting with their first project in Abilene, Texas. Across a report from Bloomberg, a report from The Information, a press release from Crusoe (the company building the data centers), and this comment from Vladimir Nesov, we get the following information:
Construction of Stargate began in June 2024. The first two data center buildings will come online in June 2025 with the power to run 100,000 NVIDIA GB200 chips.
A GB200 is approximately 2x more powerful per chip than the NVIDIA H100 chips in use today. 100,000 GB200s should be powerful enough to train models 6-10x larger than Grok 3 and GPT4.5.[1]
The next phase will involve constructing six more buildings. These are scheduled to be completed by June 2026 and will increase the total to 400,000 GB200s.
400,000 GB200s should be powerful enough to train models 24-40x larger than Grok 3 and GPT4.5.
And there’s more coming over 2027-2028: the Abilene location is just the first of potentially 10 planned Stargate locations across the US.
But the key question: how does this compare to other AI companies? The answer is that xAI, Meta, Google, and Amazon are also on track:
Elon Musk’s xAI already has 100,000 H100s operational in their Memphis “Colossus” center as of June 2024. They used this cluster to train Grok 3. But now xAI is reported to be finalizing a $5B deal with Dell to supply xAI with GB200s this year at the Memphis location. The exact number is not known but could be ~100,000 GB200s in 2025. This would keep xAI within 2x of the total compute in OpenAI’s buildout.
Meta is currently operating approximately 340,000 NVIDIA H100-equivalents in its data centers. They have plans to reach “600,000 H100 GPU equivalents of compute” by the end of 2025. Meta is also said to be working towards a 2GW data center that will house over 1.3M NVIDIA H100 equivalents, though this is a multi-year project. This seems to suggest Meta is on track with Stargate. (Though this may not be an apples-to-apples comparison, given the differing timelines and how the compute is used.)
Google has been developing its own custom AI chips, called “TPUs”, for years, alongside its use of NVIDIA GPUs. Google’s latest TPU v5p is estimated to be ~2x as good as NVIDIA’s H100. Google currently trains in the Iowa/Nebraska region, with four sites totaling over 500MW of capacity, and is looking to add more sites in the Ohio region, building toward 1GW of capacity by the end of 2025. This should also keep Google competitive with Stargate. Google’s approach is different in that they have already demonstrated the capability to train models effectively across multiple data center regions.
Amazon (AWS) is pursuing a dual strategy of using NVIDIA GPUs while developing their own custom AI chips. AWS has announced the general availability of their Trainium2 chips, which are supposed to be ~2x as good as NVIDIA’s H100, though less tested and with caveats. AWS is using these chips to build Project Rainier, a data center with ~400,000 Trainium2 chips estimated to be operational in 2025. This also keeps AWS on track with Stargate.
So in total we have five major US companies doing large data center buildouts, with no one company currently projected to dramatically overtake any other in raw compute. But we will still see much larger and better AI models than we have today as total compute grows 10x-40x over the next two years. It seems like American capitalism is working exactly as designed.
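For a rough sense of how these buildouts stack up, here is a small back-of-the-envelope Python sketch that converts the announced figures into H100-equivalents, using only the loose conversion ratios mentioned in this section (GB200 ≈ 2x H100, Trainium2 ≈ 2x H100). The chip counts and ratios are the estimates cited above, not confirmed figures, and Google is omitted because its plans are described in megawatts of capacity rather than chip counts.

```python
# Back-of-the-envelope comparison of announced 2025-2026 buildouts,
# converted to rough H100-equivalents using the estimates cited above.
# These are loose, illustrative numbers, not confirmed figures.

GB200_TO_H100 = 2        # rough per-chip estimate used in this section
TRAINIUM2_TO_H100 = 2    # rough per-chip estimate used in this section

buildouts_h100_equiv = {
    "OpenAI Stargate Abilene (mid-2025)": 100_000 * GB200_TO_H100,
    "OpenAI Stargate Abilene (mid-2026)": 400_000 * GB200_TO_H100,
    "xAI Colossus (2025, incl. possible GB200 deal)": 100_000 + 100_000 * GB200_TO_H100,
    "Meta (stated target, end of 2025)": 600_000,
    "Amazon Project Rainier (2025)": 400_000 * TRAINIUM2_TO_H100,
}

for site, h100_equiv in sorted(buildouts_h100_equiv.items(), key=lambda kv: -kv[1]):
    print(f"{site:<48} ~{h100_equiv:>9,} H100-equivalents")
```

Even with those caveats, the totals all land within a few-fold of one another, which is the point: none of these buildouts dwarfs the rest.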
~
Sam Altman is still a serial manipulator; new “board drama” comes to light
However, it wouldn’t be OpenAI without some drama. Back in November 2023, Sam Altman was infamously fired from OpenAI (and later rehired) for not being “consistently candid”. A new WSJ article this week, “The Secrets and Misdirection Behind Sam Altman’s Firing From OpenAI”, adds some extra information about Altman’s issues with candor:
Altman lied about safety testing:
During one meeting in the winter of 2022, as the board weighed how to release three somewhat controversial enhancements to GPT-4, Altman claimed all three had been approved by the joint safety board. Toner asked for proof and found that only one had actually been approved.
…and then Altman lied about safety testing a second time:
Altman had told Murati that the company’s legal department had said that GPT-4 Turbo didn’t need to go through the joint safety board review. When Murati checked with the company’s top lawyer, he said he had not said that.
…and also lied about McCauley saying Toner should leave the board:
[Altman] told Sutskever that McCauley had said Toner should obviously leave the board over the article [where Toner had criticized OpenAI safety practices]. McCauley was taken aback when she heard this account from Sutskever—she knew she had said no such thing.
And on top of that, it seems like the board wasn’t kept in the loop much. The board wasn’t told that a pre-launch GPT4 test had skipped safety review:
Around the same time, Microsoft launched a test of the still-unreleased GPT-4 in India, the first instance of the revolutionary code being released in the wild, without approval from the joint safety board. And no one had bothered to inform OpenAI’s board that the safety approval had been skipped. The independent board members found out when one of them was stopped by an OpenAI employee in the hallway on the way out of a six-hour board meeting.
…and the board wasn't told about Altman owning the Startup Fund:
an OpenAI board member overheard a person at a dinner party discussing OpenAI’s Startup Fund. [...] OpenAI had announced it would be “managed” by OpenAI. But the board member was overhearing complaints that the profits from the fund weren’t going to OpenAI investors. This was news to the board, so they asked Altman. Over months, directors learned that Altman owned the fund personally.
So if the evidence was fairly clear, why couldn’t the board articulate its reasoning? According to the WSJ article, they were sandbagged by Murati:
At one point, [Murati] and the rest of the executive team gave the board a 30-minute deadline to explain why they fired Altman or resign—or else the executive team would quit en masse. The board felt they couldn’t divulge that it had been Murati who had given them some of the most detailed evidence of Altman’s management failings. They had banked on Murati calming employees while they searched for a CEO. Instead, she was leading her colleagues in a revolt against the board.
Recall that it was Murati who had helped initiate the firing:
Murati described how what she saw as Altman’s toxic management style had been causing problems for years, and how the dynamic between Altman and Brockman—who reported to her but would go to Altman anytime she tried to rein him in—made it almost impossible for her to do her job.
Murati had raised some of these issues directly with Altman months earlier, and Altman had responded by bringing the head of HR to their one-on-one meetings for weeks until she finally told him she didn’t intend to share her feedback with the board. […]
Treading carefully, in terror of being found out by Altman, Murati and Sutskever spoke to each of the independent board members over the next few weeks. It was only because they were in daily touch that the independent directors caught Altman in a particularly egregious lie.
…but it sounds like the firing wasn’t rolled out very well:
That night, Murati was at a conference when the four board members called her to say they were firing Altman the next day and to ask her to step in as interim CEO. She agreed. When she asked why they were firing him, they wouldn’t tell her.
“Have you communicated this to Satya?” Murati asked, knowing how essential Microsoft CEO Satya Nadella’s commitment to their partnership was to the company. They had not. They decided that Murati would tell Microsoft just before the news was posted on OpenAI’s website.
Altman’s surprise firing instantly became an explosive headline around the world. But the board had no answers for employees or the wider public for why Altman was fired, beyond that he had not been “consistently candid” with the board.
Friday night, OpenAI’s board and executive team held a series of increasingly contentious meetings. Murati had grown concerned that the board was putting OpenAI at risk by not better preparing for the repercussions of Altman’s firing.
…and it looks like the OpenAI board didn’t see this coming:
Sutskever was astounded. He had expected the employees of OpenAI to cheer. By Monday morning, almost all of them had signed a letter threatening to quit if Altman wasn’t reinstated. Among the signatures were Murati’s and Sutskever’s. It had become clear that the only way to keep the company from imploding was to bring back Altman.
So I’m still quite confused about why the board couldn’t act with more clarity and how the firing got so out of control. With a hat tip to Shakeel Hashim, I took a deeper look at former OpenAI board member Helen Toner’s reasoning on the TED AI Show:
Toner opens with something important to remember. OpenAI’s board was not structured like a standard corporate board — it was a nonprofit board designed to ensure the organization’s “public good” mission superseded profit incentives. This structure was intended to safeguard AI safety concerns and the broader social impact of OpenAI’s work.
According to Toner, Altman often withheld crucial information, misrepresented internal developments, or outright lied. This is documented with more examples above. This behavior allegedly prevented the board from fulfilling its oversight duty and maintaining trust. Without reliable communication from the CEO, the board’s ability to evaluate or intervene was severely compromised.
In addition to the lies, two OpenAI executives (presumably Murati and Sutskever) also approached the board with serious complaints about Altman’s conduct, describing it as untrustworthy and “psychologically abusive.” They provided screenshots and documentation backing their claims. These executives expressed zero confidence in Altman changing his behavior and believed he was the wrong individual to lead OpenAI toward safe AGI.
The board members who fired Altman felt there was no longer any basis for trust — especially crucial given the board’s oversight function.
Toner emphasizes that once they decided Altman was not fit to continue, they feared he would act immediately to dismantle or neutralize the board’s power if he learned of their intentions. (This happened anyway.)
How did Altman come back? Toner says there was a widespread narrative inside OpenAI: Bring Altman back immediately or the company will be destroyed. According to Toner, staff were told — or believed — that there would either be Altman’s immediate return with a completely new board on his terms or OpenAI’s collapse. But of course there were more than two options.
…But all this still doesn’t explain why the board couldn’t circulate more of the information about Altman, even privately, to explain their reasoning. Or why Murati herself seemed so confused, given she is alleged to have instigated the situation.
Potentially the board felt they couldn’t undermine her, given she was their choice for interim CEO. Also, the board revealing that “Murati told us Altman was terrible, but now she’s supporting him” seems like a no-win scenario. And possibly there were many legal landmines. But it still seems hard to imagine that the board played this out well.
~
xAI acquires X (Twitter)
Elon Musk announces that xAI has acquired X:
xAI has acquired X in an all-stock transaction. The combination values xAI at $80 billion and X at $33 billion ($45B less $12B debt).
[…] xAI and X’s futures are intertwined. Today, we officially take the step to combine the data, models, compute, distribution and talent. This combination will unlock immense potential by blending xAI’s advanced AI capability and expertise with X’s massive reach. The combined company will deliver smarter, more meaningful experiences to billions of people while staying true to our core mission of seeking truth and advancing knowledge. This will allow us to build a platform that doesn’t just reflect the world but actively accelerates human progress.
This is pretty interesting. Firstly, it’s kinda weird that Musk has so many companies in the first place — across Tesla, SpaceX, xAI (now including X/Twitter), the Boring Company, and Neuralink. But such an amalgamation is not actually all that different from holding companies like Alphabet and Meta that can combine quite diverse interests (e.g., both YouTube and Waymo in the same Alphabet or Instagram and haptic gloves in the same Meta). Perhaps separating Musk’s interests across companies has been good for innovation by making the incentives better.
But it does seem like xAI’s Grok has been fairly intertwined with Twitter. Grok uses tweets as a private data source for model training and for direct search integration, giving Grok a potential edge that other chatbots won’t be able to match. (Personally, the main reason I use Grok is for Twitter search.) Twitter is also a fairly reasonable distribution channel for Grok given the possibilities for direct integration.
But this is also probably in part just a pure stock value play. The deal seemingly allows Musk to share xAI's value with his co-investors in X. Many of the investors overlap between the two companies. This could be seen as a way to protect X investors from losing money on their investment. In this way, it could be similar to when Musk’s Tesla acquired Musk’s SolarCity in 2016, effectively bailing out SolarCity’s financial problems.
~
PSAs: Protect yourself online
Beware linked devices on Signal
You almost certainly heard about the saga where the Editor in Chief of The Atlantic was accidentally sent Houthi attack plans over Signal. This is an important story in its own right, but it’s also a good reminder that while Signal is encrypted, that does not mean it is difficult for someone to intercept your messages.
Google Threat Intelligence reports that Russian hackers have been exploiting Signal’s legitimate “Linked Devices” feature to silently gain unauthorized access to victims’ Signal accounts and read copies of their messages. This works by tricking victims into scanning QR codes that appear legitimate but actually link their Signal account to an attacker-controlled device. For example, fake Signal group invites that replace the legitimate redirect code with a device-linking URL.
How to stay protected?
Update to the latest Signal application version, which includes improved protections against these attacks.
Regularly check your list of linked devices (go to “Settings”, then “Linked Devices”).
Exercise caution with QR codes. Never scan QR codes sent to you that claim to be for “joining groups” or “verification”.
~
23andMe’s bankruptcy - delete your data as a precaution
23andMe is headed into bankruptcy. Your genetic data should be safe, as 23andMe’s Terms and Conditions are meant to protect data in the event of a sale of the company. But you never know what might happen, so this is a good time to consider deleting your 23andMe data outright.
To do this:
Login
Click on “Settings” (top right corner)
In the section on “23andMe Data”, click “View”
Scroll to the end to the “Permanently delete data” section. (You can also download your data here before deleting.)
~
Additional advice: secure passwords, two-factor authentication, and caution
Andrej Karpathy has advice about how to stay safe and secure online:
Passwords: Use the 1Password password manager to generate secure and unique passwords.
Two-factor authentication: Most 2FA comes in the form of text messages, but it turns out an attacker can call your phone company, pretend to be you, get them to switch your phone number over to a new phone that they control, and then receive those messages. Instead, use a YubiKey and/or iOS’s Face ID.
Enable disk encryption to encrypt your computer. On a Mac, this is FileVault.
Exercise caution around “smart” devices, which are often insecure and report tons of data.
Use Signal instead of text messages.
Exercise caution around email: Email addresses are extremely easy to spoof, and you can never be certain that an email you got isn’t a phishing attempt from a scammer. If you get an email with a link, it’s better not to click that link and instead go directly to the website. Similarly, don’t call a phone number given in an email; look up the number yourself instead.
There is other advice in the article that goes even deeper on privacy, but the above is what I do and what I can independently confirm.
~
Whimsy
…Closing with a classic
This music video from OK Go, built around a half-mile Rube Goldberg machine, apparently involved a month and a half of construction, 60 takes over two days of filming, a staff of 55-60 people, and a $90,000 budget.
[1] A GB200 contains two B200 chips. A B200 is capable of ~4.5e15 operations per second on average. If new models are typically trained for 100 days, then 100,000 GB200s running for that period of time at 30% utilization would be 4.5e15 op/s/B200 * 60 s/min * 60 min/hr * 24 hr/day * 100 days * 100,000 GB200s * 2 B200s/GB200 * 0.3 utilization = ~2e27 FLOP.
Per Epoch, Grok 3 was trained on 4.6e26 FLOP. Data for GPT4.5 is not known, but Altman stated that it was 10x the size of GPT4. Per Epoch, GPT4 was 2.1e25 FLOP, so GPT4.5 should be ~2e26 FLOP. These are 6-10x smaller than the ~2e27 models that would be trained in these data centers.
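If you want to check the footnote’s arithmetic, here is a minimal Python sketch that reproduces it. The B200 throughput, 100-day training run, and 30% utilization are the assumptions stated above, not measured values.

```python
# Reproduces the back-of-the-envelope training-compute estimate from footnote 1.
# All inputs are assumptions stated in the footnote, not measured values.

B200_OPS_PER_SEC = 4.5e15   # assumed average ops/s per B200
B200_PER_GB200 = 2          # a GB200 contains two B200 chips
UTILIZATION = 0.3           # assumed training utilization
TRAINING_DAYS = 100         # assumed length of a frontier training run
SECONDS_PER_DAY = 60 * 60 * 24

GROK3_FLOP = 4.6e26         # per Epoch
GPT45_FLOP = 2e26           # ~10x GPT4's 2.1e25 FLOP (per Epoch)

def training_flop(num_gb200: int) -> float:
    """Total operations from one training run on num_gb200 GB200 chips."""
    return (B200_OPS_PER_SEC * B200_PER_GB200 * num_gb200
            * UTILIZATION * TRAINING_DAYS * SECONDS_PER_DAY)

for chips in (100_000, 400_000):
    print(f"{chips:,} GB200s over one run -> ~{training_flop(chips):.1e} FLOP")

print(f"For reference (per Epoch): Grok 3 ~{GROK3_FLOP:.1e} FLOP, GPT4.5 ~{GPT45_FLOP:.1e} FLOP")
```

The first phase comes out around 2e27 FLOP, matching the footnote’s estimate.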