AI this week: Trump, Musk, and Altman's High-Stakes Games
How political realignment and corporate maneuvering are reshaping the future of AI
About the author: Peter Wildeford is an AI analyst and top forecaster, ranked in the top 1% every year since 2022. Here, he shares the news and analysis that informs his forecasts.
While France attempted to establish its own AI leadership with the AI Action Summit, an equally important story this week is how US political realignment and corporate maneuvering are reshaping AI's future.
I covered France yesterday. Let’s try to catch up with everything else that happened.
And subscribe if you want more of this sort of coverage:
Trump
…Scale AI working with US AI Safety Institute
The US AI Safety Institute, established in 2023, represents America's first major institutional approach to evaluating AI systems and establishing security standards for AI development. The institute initially operated with bipartisan support but has faced uncertainty during the presidential transition.
Scale AI began as a data labeling provider for AI companies before expanding into AI evaluations. On Monday, it was announced that Scale AI had been selected as the first approved third-party evaluator for the US AI Safety Institute.
This is interesting news for a few reasons:
First, we didn’t actually know whether the US would still have an AI Safety Institute, and this news might be the first confirmation that AISI is continuing. AISI was created during the Biden administration under the Department of Commerce, and the Trump administration hasn’t yet said where it plans to take it. Most AISI leadership and staff resigned during the presidential transition, and AISI staff did not attend the Paris summit.
This also shows connections between industry and Trumpworld. In March 2021, Michael Kratsios joined Scale AI as its managing director and head of strategy. Now, Kratsios is Trump’s nominee to be Director of the Office of Science and Technology Policy and is bringing along his Scale AI connections. Insofar as this helps the US government stay informed about AI progress within industry and helps AISI continue, this is a good thing.
Scale AI may actually bring good evaluations. It has already developed several evaluation frameworks, including tests for sensitive domains like nuclear and cybersecurity (WMDP), stress tests for web-browsing AI agents (BrowserArt), and, in partnership with the Center for AI Safety, "Humanity's Last Exam".
~
…How Sam Altman got into Trumpworld
SF power moves? Trump? AI? Elon Musk? Multi-billion dollar data centers? In a New York Times article that literally checks every box for this Substack, Kang and Metz detail “How Sam Altman Sidestepped Elon Musk to Win Over Donald Trump”.
You might think that Altman, a former Democratic donor and Musk's Public Enemy #1, would be dead on arrival in Trumpworld. You’d be wrong. Apparently the secret was appealing to Trump's real estate background and his love of announcing grand infrastructure projects.
Some other key things from this article:
Altman told Trump that the tech industry would achieve artificial general intelligence during the Trump administration.
OpenAI’s key path into the Trump administration was via Secretary of the Interior Doug Burgum, as well as Larry Ellison (co-founder of the software company Oracle) and Masayoshi Son (founder of the Japanese conglomerate SoftBank).
OpenAI executives met privately with Trump in Las Vegas, giving him a sneak preview of Sora prior to public release.
Altman regularly communicated via a private text thread with Biden administration Commerce Secretary Gina Raimondo.
While other tech executives like Zuckerberg were meeting Trump at Mar-a-Lago, the best Altman could manage was a meeting in Palm Beach with future Commerce Secretary Howard Lutnick. But after donating $1 million to Trump’s inaugural fund, Altman was invited to the inaugural festivities, though he got put in the overflow room.
Why did Microsoft dump Stargate? Apparently because of OpenAI’s infamous-but-temporary firing of Altman. The Biden administration had also expressed concern about OpenAI’s efforts to secure additional money from investors in the Middle East, and Microsoft worried that the government would be slow to approve a project requiring enormous amounts of land and electricity. But it seems like Trump is poised to change both of those things.
How many tech CEOs does it take to work a projector? Turns out the answer is three, and it was Altman who actually got the projector working.
Just goes to show that Paul Graham was right about Altman - “You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king.” Though in my opinion, five years is a bit of an overestimate.
Musk
Elon Musk has been doing a lot of things lately, but one additional thing he has been doing is annoying OpenAI, the organization he helped co-found and initially fund, and which he is now suing.
…The initial mission
OpenAI originally started as a non-profit with a stated mission “to ensure that artificial general intelligence benefits all of humanity.” To raise capital, it moved to a more complicated hybrid structure: the non-profit foundation controls a for-profit LLC, investor returns are capped at 100x the initial investment, and profits beyond that cap flow back to the non-profit.
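To make the cap concrete, here is a minimal sketch with purely hypothetical numbers (the real structure has many more layers) of how a 100x return cap would split proceeds between an investor and the non-profit:

```python
# Illustrative arithmetic only, not OpenAI's actual deal terms: how a 100x
# return cap splits hypothetical proceeds between an investor and the non-profit.

def split_proceeds(investment: float, gross_return: float, cap_multiple: float = 100.0):
    """Return (investor_share, nonprofit_share) under a capped-profit structure."""
    investor_cap = investment * cap_multiple               # most the investor can ever receive
    investor_share = min(gross_return, investor_cap)       # payout is capped
    nonprofit_share = max(gross_return - investor_cap, 0)  # everything above the cap flows back
    return investor_share, nonprofit_share

# Hypothetical: a $10M investment whose stake eventually returns $1.5B.
print(split_proceeds(10e6, 1.5e9))  # -> (1000000000.0, 500000000.0): $1B to the investor, $0.5B to the non-profit
```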
OpenAI now wants to convert fully from a non-profit into a “public benefit corporation”, which is how Anthropic is structured: a for-profit that is allowed to pursue social value alongside shareholder value rather than exclusively maximizing profit. The new structure would let OpenAI bring in more investment. Indeed, OpenAI’s most recent $6.6B in financing is contingent on the for-profit transition, and the money will need to be repaid as a loan if the transition doesn’t happen.
This is a big shift from OpenAI’s original stance, when it declared in bold on its website: “it would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world”. The statement is still there, but it soon may not apply.
…What it takes to convert to for-profit
However, one can’t just “convert” a non-profit into a for-profit. That’s not how the legal world works, and it would be unfair to the non-profit’s donors. There are a lot of complex steps involved, but essentially the for-profit has to “buy out” the non-profit by paying it fair market value for everything it gives up: its intellectual property, its claim on future profits, its control rights, the “merge and assist” clause, other rights, and the fact that OpenAI is currently in the lead toward creating transformative AI systems and may not uphold the non-profit mission by default once sold. Each of these alone is potentially worth billions.
…What is fair market value for OpenAI? And Musk’s offer
So this gets us to the question: what is fair market value for the OpenAI non-profit? OpenAI initially proposed $37.5B in compensation to its non-profit arm, but as Lynnette Bye explores in “Is OpenAI being fair to its non-profit?”, this may be an underestimate, and the true value could exceed $78B.
This fair market value will be scrutinized by a nominally independent OpenAI board and by the courts. However, another wrinkle is Musk, who came in with a $97.4B bid to buy the OpenAI non-profit. That bid is strong evidence that the fair market value of the OpenAI non-profit is now at least $97.4B, rather than the originally proposed $37.5B. It’s going to be hard to explain why the board should decline a $97.4B offer and sell for $37.5B instead.
I don’t think Musk is actually going to buy OpenAI, or even trying to. I think he’s trying to make a point and complicate things for OpenAI. And on this point, Musk is right.
…So what happens from here?
In "Emergency pod: Elon tries to crash OpenAI's party", Rose Chan Loui, Executive Director of the Lowell Milken Center for Philanthropy & Nonprofits at UCLA Law, explains how OpenAI's planned conversion to a public benefit corporation with a vague new mission may face heightened scrutiny from state attorneys general especially following Musk's competing offer.
OpenAI can only change its charitable purpose if the original purpose becomes illegal, impossible, impracticable, or wasteful to pursue - criteria that likely won’t be met. Furthermore, Altman's immediate rejection of Musk's offer potentially undermines the required “arm's length” nature of nonprofit board decisions — the decision is not supposed to be Altman’s to make.
The outcome likely depends on complex negotiations between multiple parties. The nonprofit board must evaluate board composition proposals, financing credibility, and antitrust concerns, while state attorneys general weigh both legal requirements and political considerations.
Jungwon, commenting on Twitter from her own experience converting a non-profit into a for-profit, notes that the decision will have to be made by the nominally independent OpenAI non-profit board and that Sam Altman will have to recuse himself due to a conflict of interest. The non-profit cannot show favoritism to OpenAI LLC or to Sam Altman over other potential independent offers, and it must pursue an independent valuation of its assets, a valuation made more complicated by Musk’s bid. The best indicator of fair market value is actual offers from the market, and that’s exactly what Musk provided.
Jungwon offers a few possible moves for Altman:
Assert that Musk's offer is not real, since Musk has a long history of claiming to have funding he doesn’t have. But this probably won’t work, as Musk can likely get the money.
Try to get a bunch of other offers and suggest Musk’s offer is an outlier and thus not fair market value, but that alone shouldn’t be sufficient.
Assert that the true value of OpenAI is its talent and that the talent won’t work for Musk and thus a sale to him would squander the value.
Argue that a sale to Musk would violate OpenAI’s mission.
Jungwon closes by mentioning that the non-profit board is under no obligation to sell at all, besides pressure from Altman. This will be a highly consequential decision for the board to make.
…Time for another OpenAI board fight?
Jungwon is right. The board is supposed to be fully independent and has an obligation not to simply do what Altman says. Individual OpenAI board directors could independently challenge any deal they view as unfair. In “Musk's OpenAI Bid Leaves Door Open For Boardroom Activism”, reporters Drew and Palazzolo explain how California corporate law gives each individual OpenAI director the power to sue if they believe the non-profit is accepting too little equity in the planned conversion to a public benefit corporation. This creates leverage for Musk's strategy of forcing OpenAI to give more equity to the non-profit than initially planned, potentially at the expense of existing for-profit shareholders.
If directors remain passive, enforcement would fall to state attorneys general, but director-initiated legal challenges would likely receive more sympathetic treatment from judges. This will be one of the highest stakes boardroom deals to watch, and Musk just made it more complicated.
Altman
…Sam Altman has some observations
Like it or not, Sam Altman is in the driver’s seat for the future of humanity, so when he blogs, we all listen. Altman has three observations as AGI comes into view:
1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.
2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger (a quick comparison is sketched below this list).
3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
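To put observation #2 in perspective, here is a quick back-of-envelope comparison (my arithmetic, using only the rates quoted above) of the claimed ~10x-per-year cost decline against Moore's law: after three years, the same level of capability is roughly 1,000x cheaper, while Moore's law would have delivered only about 4x.

```python
# Back-of-envelope comparison (my arithmetic, not Altman's): the claimed 10x/year
# fall in AI cost vs. Moore's law at 2x every 18 months.
for years in (1, 2, 3):
    ai_cost_drop = 10 ** years            # cost falls ~10x every 12 months
    moore_gain = 2 ** (12 * years / 18)   # transistors double ~every 18 months
    print(f"after {years} year(s): AI cost falls {ai_cost_drop:,}x vs Moore's law {moore_gain:.1f}x")
```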
Altman notes that “if these three observations continue to hold true, the impacts on society will be significant”, which is probably an understatement. For example, see his vision of 2035:
Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine.
Altman says his mission still is “ensure that AGI (Artificial General Intelligence) benefits all of humanity” and that he “never want[s] to be reckless”. I hope he succeeds at these goals, because there are tremendous stakes. In the meantime, policymakers and citizens need to keep up.
~
…OpenAI has a roadmap
So what is Altman cooking that will take advantage of his three observations? On Wednesday, Altman tweeted an “OpenAI Roadmap Update for GPT-4.5 and GPT-5”:
We will next ship GPT-4.5, the model we called Orion internally, as our last non-chain-of-thought model.
After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.
In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.
The free tier of ChatGPT will get unlimited chat access to GPT-5 at the standard intelligence setting (!!), subject to abuse thresholds.
Plus subscribers will be able to run GPT-5 at a higher level of intelligence, and Pro subscribers will be able to run GPT-5 at an even higher level of intelligence. These models will incorporate voice, canvas, search, deep research, and more.
Let’s unpack this, especially given that Altman also gave a Q&A at the University of Tokyo Center for Global Education which gave a bit more detail.
In the Q&A, Altman mentioned that each jump from GPT-2 to GPT-3 to GPT-4 involved about 100x the compute spent on the model. He suggests that so far OpenAI has “gone all the way up to about 4.5 on this scale”, referring to GPT-4.5. This gives us clues to the actual size of GPT-4.5, though I’m unsure whether Altman means a linear or logarithmic scale. If it’s logarithmic, GPT-4.5 would be about 10x GPT-4’s compute; if linear, roughly 50x. GPT-4 was estimated at ~2e25 FLOP, so GPT-4.5 would land somewhere between 2e26 and 1e27 FLOP.
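Here is the arithmetic behind that range, a minimal sketch under my assumptions (GPT-4 at ~2e25 training FLOP and a ~100x jump per full GPT generation):

```python
# Back-of-envelope check of the GPT-4.5 compute estimate (my assumptions, not
# official figures): GPT-4 ~= 2e25 training FLOP, ~100x per full GPT generation.

gpt4_flop = 2e25
jump_per_generation = 100  # GPT-2 -> GPT-3 -> GPT-4, per Altman's Q&A

# "4.5" read on a logarithmic scale: half of a 100x jump in log space = 10x
log_estimate = gpt4_flop * jump_per_generation ** 0.5        # 2e26 FLOP

# "4.5" read on a linear scale: halfway between 1x and 100x = ~50x
linear_estimate = gpt4_flop * (1 + jump_per_generation) / 2  # ~1e27 FLOP

print(f"log-scale reading:    {log_estimate:.1e} FLOP")
print(f"linear-scale reading: {linear_estimate:.1e} FLOP")
```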
The big difference between now and when GPT-4 came out is that scaling up the base model is no longer the only thing that matters. Modern models (2024 era) are built from a combination of multiple stages:
a base model, like GPT-4, that gives the system a broad understanding of the world
post-training, such as fine-tuning for capability improvement
a reasoning model, trained with reinforcement learning and using inference-time compute (compute spent while answering, which lets the model think through the problem)
This is similar to how a human thinks: the base model is roughly equivalent to our intuitions and snap judgements, post-training is roughly equivalent to learning how to do specific tasks through practice, developing better intuitions and muscle memory, and the reasoning model is equivalent to thinking hard about a particular problem and solving it using pencil and paper or other tools.
The key takeaway of 2024 has been that post-training and reasoning can also be scaled, along with the base model, to get better performance. Previously, that wasn’t known to be the case.
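To make the inference-time compute idea concrete, here is a toy sketch of my own (not how OpenAI's models actually work): a noisy solver that gets more reliable the more samples, i.e. the more compute, you spend per question.

```python
# Toy illustration (mine, not OpenAI's method): spending more inference-time
# compute per question -- here, sampling a noisy solver more times and taking a
# majority vote -- buys accuracy.
import random
from collections import Counter

def noisy_solver(x: int, y: int) -> int:
    # Stand-in for a single model pass: correct 60% of the time, otherwise off by a little.
    return x * y if random.random() < 0.6 else x * y + random.choice([-2, -1, 1, 2])

def solve(x: int, y: int, samples: int) -> int:
    # More samples = more compute spent "thinking" about one problem.
    votes = Counter(noisy_solver(x, y) for _ in range(samples))
    return votes.most_common(1)[0][0]

random.seed(0)
for samples in (1, 5, 25):
    accuracy = sum(solve(17, 24, samples) == 17 * 24 for _ in range(1_000)) / 1_000
    print(f"{samples:>2} samples per question -> accuracy {accuracy:.0%}")
```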
Though keep in mind that, contra common misconceptions about DeepSeek, compute remains very important across all of these stages, so this is still a compute-heavy and capital-heavy enterprise.
These reasoning models are powerful. Altman says “they are an incredible new compute efficiency” and have “performance on a lot of benchmarks that in the old world we would have predicted wouldn't have come until GPT-6”. But they don’t improve universally. Altman points out that they can make a model very good in certain areas (for instance, logic, math, or code), but not magically improve in every ability. (Though, as Miles Brundage points out, reasoning models are still somewhat useful at improving tasks outside math/science by learning general purpose problem-solving skills like decomposition and backtracking.)
Other things Altman answered during the Q&A:
Stargate will have 100x the compute capacity of what OpenAI has today. (That matches my math: current frontier models are trained on ~2e26 FLOP, and it looks like Stargate will be able to produce models at ~3e28 FLOP; a rough back-of-envelope check is sketched after this list. That should be ready in 2028 or 2029.)
How should we prepare for our AGI future? Altman thinks that no human will outrun AI on raw horsepower for math, programming, or science tasks — like trying to be better at multiplication than a calculator. Instead, he thinks the key is to work with AI to do more creative and sophisticated things, such as orchestrating big ideas. Freed from manual tasks, you can do high-level problem-solving and creative pursuits. Altman thinks the fundamental skills for the future are adaptability, resilience, creativity, and the ability to work with AI as a tool.
Open source OpenAI? Altman says “we’re going to do it” when asked about open sourcing model weights, though he’s uncertain exactly when or how. (Likely OpenAI wouldn’t release the model weights of the cutting-edge models, though.)
Risk talk was focused on near-term harms: Discussion of AI risks was present in the conversation, but focused mainly on potential lethal AI use in the Israel–Gaza conflict (Weil’s response: “the AI is not ready for that use case,” and “we require a human in the loop”), the ethics of web scraping and copyright (Altman says OpenAI is trying to refine how it sources data), the ethics of “exploiting” Kenyan workers for content moderation, the environmental cost of training huge models (Altman notes that energy use per query is dropping a lot and that he wants to use AI to advance clean energy breakthroughs), and how AI could erode human values (Altman says “We’ll work on that” without going into depth). Other risks (e.g., bioweapons, cyberweapons, loss of control) were not mentioned.
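Referring back to the Stargate bullet above, the back-of-envelope check, under rough assumptions (today's frontier runs at ~2e26 FLOP, Stargate at ~100x current capacity), lands in the low 10^28 FLOP range, the same order of magnitude as the ~3e28 estimate:

```python
# Rough order-of-magnitude check on the Stargate bullet (my assumptions, heavy rounding).
current_frontier_flop = 2e26   # rough estimate for today's frontier training runs
stargate_multiplier = 100      # Altman: Stargate = 100x OpenAI's current compute
print(f"~{current_frontier_flop * stargate_multiplier:.0e} FLOP")  # ~2e28, i.e. order 10^28
```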