Does Trump’s AI Action Plan have what it takes to win?
Ten takes on the AI Action Plan
Two weeks ago, the Trump administration unveiled “Winning the Race: America's AI Action Plan,” a sweeping 28-page document outlining over 90 federal policy actions designed to cement US dominance in artificial intelligence.
The Action Plan is ambitious in scope, touching everything from semiconductor manufacturing to worker retraining, from export controls to energy infrastructure. And there’s a lot to like — the suggested policy actions are sharp, well-informed, and clearly actionable. These are ideas that would do a lot to ensure AI goes well for humanity.
But there’s still so much more to do. And for all the breadth, the Plan still contains some critical contradictions and omissions that could undermine America's stated goals. I've identified ten takes on what this Plan gets right, what it misses, and what it means for America's AI future. I really hope the Plan’s authors can take some of this to heart, build on their great success so far, and develop an even stronger way forward.
Here are my thoughts:
1. The Plan shows refreshing optimism
As much as I talk about risks and threats from advanced AI systems that merit preparation, those threats are emphatically not what I’m actually excited about. Historically, scientific progress has brought enormous wealth and opportunity to all of humanity. If AI becomes capable of automating this scientific progress and innovating across many domains, it is genuinely plausible we could enter a true Golden Age. Done right, this would create a world where everyone is free and empowered to self-determine and self-actualize, without barriers to living the lives they want to live.
I’m excited that the Plan shares this optimistic vision of the future we can create together. The Plan calls AI “an industrial revolution, an information revolution, and a renaissance—all at once”. I’m happy to see the Trump administration put forward a plan that actually envisions this positive transformation — new materials, breakthrough drugs, radical educational innovations, and jobs that don’t exist yet. That’s exciting.
2. The Plan places security where it should be — an enabler of AI dominance, not a barrier
However, the reason I focus on threats is that security is key to grounding this future. If we lose our current geopolitical stability and move to an era where we are constantly under attack, we will not have a flourishing future. If we fail to establish security and reliability in AI systems, we won’t have AI systems that people can trust and adopt and build the future with.
The Plan recognizes this and the security sections are genuinely impressive. The Plan calls for high-security data centers for military and intelligence use, exploring requirements for secure-by-design AI systems, building incident response playbooks updated for AI-specific vulnerabilities, and creating frameworks for information sharing in AI similar to what we have for cybersecurity.
This is an important recognition that AI systems will become critical infrastructure — potentially as important as, or even more important than, the power grid or financial system. Yet we consistently underestimate the vulnerabilities we take on in adopting AI, whether through AI’s own unreliability or through China’s ability to steal or tamper with our technology via cyberespionage, insider threats, and chip smuggling. We cannot win the AI race if our training regimen is unreliable and easily copied, our prize running shoes can be tampered with and smuggled, and the trophy at the end sits in an unlocked case. It would be catastrophic to invest enormously in AI only to have it stolen or sabotaged.
The most important aspect of the Plan is thus that it treats security as enabling widespread AI adoption rather than hindering it. Starting with security baked in means AI systems that organizations can actually trust. This will be especially important for adoption of AI in military and other high-risk contexts. No one wants to see the military using an AI system it doesn’t understand and can’t guarantee it controls, yet that describes every advanced AI system currently on offer. This is why it’s great to see the Plan call for urgent research into “AI interpretability, control, and robustness”, as well as data center security — all four things we will need for military AI to work.
Still, implementation challenges loom. Building high-security data centers will require vetting all personnel, including non-traditional contractors. The technical standards for these facilities don't exist yet. The money for critical AI R&D is neither authorized nor appropriated. The evaluations are not designed. Creating all of this will require unprecedented collaboration between agencies with different security cultures and different levels of AI knowledge. This will be a huge undertaking.
The key insight is that without robust security infrastructure, the AI revolution will stall. Companies won’t deploy systems they can’t trust, and consumers won’t use them. Without security, adversaries like China will steal our innovations, use them against us, and tamper with the AI we widely deploy. By prioritizing security as an enabler rather than an obstacle, the Plan lays the very foundation for America’s AI leadership.
3. The Plan acknowledges AI’s transformative potential but not its unique challenges
However, amid the refreshing optimism about the future of AI and the need to develop it with reliability, I was disappointed not to see the Plan key in more clearly on the unique challenges of advanced AI.
The Plan correctly frames AI as powerful — as capable of reshaping “the global balance of power”. I like this. But I was surprised to see that the Plan contains zero mention of “artificial general intelligence” or “superintelligence”, and never clearly points out where we are going. While there’s a lot of hype and uncertainty around AI, many top experts predict that AGI could plausibly be created within just five years.
The problem isn’t the avoidance of the trendiest AI buzzwords. The problem is that the Plan focuses solely on familiar risks from AI and ignores the far more pressing problems of AGI. The Plan’s existing focus on deepfakes, cyberattacks, and misuse in support of the illicit acquisition and use of chemical, biological, radiological, and nuclear weapons is welcome, of course. These are real concerns that deserve attention, and real challenges that will be difficult to solve. But we must not stop there. My key concern is not just that AI might be misused by terrorists, but that AI might someday itself become a highly capable independent actor.
Maintaining human control over these highly advanced AI systems is far from guaranteed. And if AI companies are right, we don’t have much time before we see significant increases in AI ability. The Plan’s call to invest in “AI interpretability, control, and robustness breakthroughs” is a laudable acknowledgment that we don’t fully understand or control current AI systems, but we also need to prepare for even more advanced ones.
AI systems that could soon outsmart all of humanity combined will, by definition, be very difficult to control. This could be humanity’s greatest challenge of all time, and I want the Plan to take it far more seriously. Many politicians from both sides of the aisle have started taking these terms and threats seriously, and in any case avoiding the terminology doesn’t make the reality go away.
4. The government needs better situational awareness
This lack of acknowledgement of the largest challenges from AI points to a broader issue where the government doesn’t know enough about the relevant challenges from AI and thus ends up not on track to meet them. This is not unique to the Trump administration — this has been a problem with every administration so far. The Plan is a good first step in fixing this, but we will need much more.
A key issue is that the US government has no idea what’s happening inside frontier AI companies, and AI literacy is far too low among US policymakers. When thinking about how AI can revolutionize science, business, medicine, and many other fields, we must be mindful that AI uplifts bad actors alongside good ones. Researchers have already used LLMs to find critical cyber vulnerabilities that humans missed. North Korean hackers will soon have this technology and be able to use it against American infrastructure. AI could soon help terrorists build bioweapons, create massive economic shifts, and risk full-on loss of control. But the government finds out when everyone else does, if at all.
To be clear, this isn’t the industry’s fault. Companies face real disincentives to share — regulatory backlash, liability concerns, and competitive disadvantage. But this is a national security blind spot. The government is supposed to provide for the common defense, and it can’t do that if it doesn’t know what threats exist.
Industry, for its part, doesn’t know if Chinese hackers are infiltrating its networks. Companies lack threat intelligence on how adversaries might weaponize their systems. They lack nation-state-level capabilities to secure themselves against nation-state adversaries, and they need that help from the government.
Thus it’s exciting to see the Plan task multiple agencies with building evaluation capabilities. OSTP, DOE, CAISI, NSC, and the Intelligence Community are all supposed to “prioritize, collect, and distribute intelligence on foreign frontier AI projects that may have national security implications.” I’m excited to see these activities develop. I hope we end up with a US government ecosystem that fully understands the unique risks and opportunities of AI.
One specific failure surprised me: a variety of US policymakers were completely blindsided by the DeepSeek launch in January. This was stunning, as the Chinese AI lab had been openly publishing impressive efficiency gains for the past year. Its innovations in mixture-of-experts architectures were documented in May. The v3 paper demonstrating its capabilities came out nearly a month before Washington’s collective freakout. Yet when DeepSeek r1 launched, it was widely misunderstood and misinterpreted, sending undeserved shockwaves through DC as if the model had materialized from thin air.
The DeepSeek shock could have been entirely avoided if someone in government had been tracking Chinese AI efficiency research and briefing policymakers on its implications. Instead, we got panic and overreaction that undermined confidence in American AI leadership. The Plan’s proposal for regular DOD-IC ‘AI net assessments’ comparing US and adversary capabilities is a start. But these assessments need to be living documents and need to be widely socialized across the DC ecosystem, not annual reports that are out-of-date before being published and exist only to gather dust on a shelf.
And in doing this, we also need to keep our eye on the ball. We shouldn’t be tracking proxies like STEM PhD counts or open-source downloads, but the key metrics that track developments toward AGI and ASI. Without this early warning system, we’ll keep getting surprised. And in a field where six months can represent a generational leap in capabilities, surprise is a luxury we can’t afford.
5. CAISI is tasked with everything, but this Plan is dead without proper resources
I was very excited to see in early June that the Trump Commerce Department was keeping the AI Safety Institute, transforming it into the Center for AI Standards and Innovation (CAISI).
The Plan gives CAISI a lot of work. This small NIST center is supposed to:
Evaluate Chinese models for Communist Party alignment
Build national security AI evaluations
Help other agencies understand AI and build evaluations
Develop security standards for data centers
Support DOD and IC security frameworks
and more
This is exciting, as CAISI has the talent, initiative, and mandate to get a lot of great work done. But this ambitious mandate would challenge even a well-funded agency, and CAISI is not well-funded. The entire $10M annual budget of CAISI is a small fraction of what Meta’s Zuckerberg has offered in pay for just one star researcher. CAISI isn’t even codified in law and could disappear with the next administration’s priorities.
CAISI needs to recruit top AI researchers, which means competing with private-sector salaries, or at least not forcing massive pay cuts. CAISI needs computing resources to actually test frontier models. CAISI needs authority to coordinate across agencies with different cultures and priorities. Without adequate resourcing, CAISI can’t deliver on the critical national security priorities it’s tasked with. The Plan will be dead on arrival if Congress doesn’t act.
6. The export control strategy doesn’t do enough to prevent Chinese AI
The Introduction to the Plan states that “we must prevent our advanced technologies from being misused or stolen by malicious actors” with constant vigilance. The Plan goes on to devote an entire section to strengthening AI compute export control enforcement, explaining the importance of denying foreign adversaries access to the advanced AI compute that is “essential to the AI era”.
There’s a lot to like about this. But I worry that it is mismatched with reality: the US government began permitting sales of an advanced AI chip — the H20 — to China just a week before the AI Action Plan came out. Why the change in course?
We must keep in mind that China domestically manufactures only a small share of its own compute, with ~85% of training compute and ~95% of inference compute coming from illicitly smuggled Nvidia chips (potentially billions of dollars worth), illicitly acquired TSMC materials, and legal imports of the Nvidia H20.

This gives the US enormous leverage over Chinese compute, if only we would use that leverage correctly. And Nvidia is a large offender here — despite being an American company, no company has done more to undermine US dominance in the AI race. Not only does Chinese AI progress depend significantly on Nvidia chips, but every Nvidia chip sold or smuggled to China is a chip that could have been powering American AI instead.
While every major tech CEO attended Trump’s inauguration, Nvidia’s Jensen Huang was in Beijing. Despite documented evidence of smuggling at the scale of billions of dollars of Nvidia chips per year and open sales on the internet for anyone to see, Huang denies that smuggling even takes place. Despite documented evidence of the Chinese military seeking Nvidia chips, Huang denies this as well. With this pattern, it becomes difficult to trust Jensen Huang and Nvidia. Sometimes you have to wonder what side they are on.
The situation is dire. Literally while the AI Action Plan was being written, an additional one billion dollars of Nvidia chips were smuggled into China. As Mark Beall, the former Director of Strategy and Policy at the DoD Joint Artificial Intelligence Center during Trump’s first term said, “the fact that the Chinese military can freely buy, steal, download, and weaponize American technology represents a dereliction of duty that would have been unthinkable during the Cold War.”
The administration thus needs to decide: what does “constant vigilance” mean when we continue to allow China to build the vast majority of its AI infrastructure from American components? The current approach of preaching vigilance while permitting sales undermines both our credibility and our security. We need more.
7. The “China is racing” narrative needs a reality check
The central premise of the AI Action Plan is the first line of its Introduction: “The United States is in a race to achieve global dominance in AI”. I think “US AI dominance” is a somewhat unfortunate framing; it would be better thought of as “US-allied dominance” — a shared contest pitting the US, its allies, and its partners against our authoritarian adversaries, in which the US uses US-built AI to power the free world and continue as the global leader in AI development, enabling global stability and freedom.
However, when it comes to racing, we must be nuanced and accurate. I do expect some geopolitical ‘winner takes all’ or ‘winner takes most’ dynamics in achieving AGI, so in that sense the race framing is apt. It’s true that whoever leads in developing AGI will have a significant say in shaping the post-AGI society. And it’s truly important that this society be shaped by freedom and American values, as opposed to authoritarianism. But when it comes to racing China, the race looks different than it might appear: China does not seem to be racing to build AGI first, nor aiming to build AGI as fast as possible. The race as China sees it is different from the race as the US government conceives it.
Firstly, it’s often said that regulation in the US risks trading away victory to China. But this misses that China’s AI regulations are actually far more restrictive than America’s, with requirements to have models reviewed and approved by regulators before deployment, submit to security assessments and content moderation requirements, label AI-generated content, and comply with strict data localization rules. China’s approach reflects strong state control and careful management of AI development, which is very different from the US’s ‘permissionless innovation’ approach.
Secondly, we can see a massive disparity in capital deployment between the US and China. US companies are spending staggering sums on AI infrastructure. Microsoft is at $80B for FY2025 and increasing to $120B/yr. Google (Alphabet) is at $75B for 2025 and increasing to $85B, with the majority going toward “technical infrastructure, primarily for servers, followed by data centers and networking”. Amazon is at ~$105B in 2025, with CEO Andy Jassy saying the “vast majority” is for AI infrastructure in AWS. Meta announced $60-65B for 2025. No Chinese company is anywhere close.
Instead, my best understanding is that China is pursuing a “fast follower” strategy toward AGI. For example, China’s BYD wasn’t first to electric cars the way the US’s Tesla was, but BYD fast-followed, hyper-optimized, and achieved Tesla-beating scale through relentless cost efficiency. It would be concerning if China ‘won’ the AI race the same way it ‘won’ the electric car race: the US gets AGI first, but China harnesses it better.
This matters because it means our current approach of trying to stay ahead through pure innovation while leaving the door open for China to copy and scale might be the wrong strategy. If China can consistently turn our $100B AI R&D investments into $10B Chinese manufacturing empires through efficiency gains, we’re subsidizing eventual Chinese dominance. The fast follower strategy only works if the leader keeps sharing their homework — which is exactly what we're doing through inadequate export controls and rampant smuggling.
8. Retraining might not be enough to handle AGI-driven disemployment
The AI Action Plan aims to “ensure that our Nation’s workers and their families gain from the opportunities created in this technological revolution”. The Plan has three main planks for this — creating jobs from a manufacturing boom with AI, encouraging AI retraining and skill development, and studying AI’s impact on the labor market.
These are smart policies that seem like a good place to start. However, I worry this goal will still be hard to accomplish and the Trump administration might be severely underestimating the scope and difficulty of this policy goal.
There’s a fairly widespread notion that because past tech revolutions did not lead to widespread disemployment, the AI revolution won’t either. And there is some merit to this — after all, the widespread deployment and use of the ATM in the 1970s and 80s didn’t lead to the elimination of bank tellers. Instead, the number of bank tellers in the US actually increased from about 300,000 in 1970 to over 600,000 by 2010. This is because ATMs made it cheaper to operate bank branches, so banks opened more branches. And tellers shifted from cash-handling to relationship banking — selling products, solving complex problems, and providing customer service. The technology changed the job rather than eliminating it.
But AGI threatens to break this pattern. When AI becomes autonomous enough to eliminate all domain-specific human tasks AND general enough that there aren’t enough domains left for human workers, we face an unprecedented challenge. Previously, when a single technology was adopted, there were other job areas for people to move into. But what happens when we adopt a general-purpose technology that can achieve capability across every area simultaneously? Human employment would then rest only on comparative advantage. As AI gets exponentially cheaper while human wages have a floor, that comparative advantage erodes. Wages could fall below subsistence levels.
This is even more concerning when realizing the speed of AI adoption versus the speed of retraining. The Industrial Revolution unfolded over a century of gradual mechanization. The AI revolution is happening much faster, with increased adoption happening in just mere years. By the time you finish retraining from one job to another, the new job might have been automated too.
Of course, we don’t know exactly how AI will automate the economy. It may well turn out that retraining is sufficient. This makes data and retraining a fine place for the Plan to start. But we must be careful to understand that “of course AI will create new jobs too and so everything will be fine” is not something we can rely on. Massive unemployment seems very plausible when you make a technology that can automate a vast majority of jobs, and the administration needs to be ready for this.
9. Anti-renewable ideology undermines ‘all of the above’ energy and US dominance
The Plan correctly identifies energy as AI’s binding constraint. Data centers need massive power. The semiconductor fabs to supply them need more. The Plan calls for streamlined permitting, grid expansion, and embracing “new energy generation sources at the technological frontier.” So why is the administration simultaneously waging war on renewable energy?
I don’t say this out of some crusade for climate change awareness. This is pure economic logic. At current margins, adding renewable capacity is cheap. Solar panels and wind turbines have no fuel costs. When the sun shines or wind blows, you burn less gas and save money. Battery costs have plummeted, making short-term storage economical. A diverse grid with multiple generation sources is more reliable than depending on any single technology.
If we want to outmanufacture China, we will need an “all of the above” energy plan that includes both Trump’s love for “big beautiful coal” and wind, solar, geothermal, and nuclear power. China is building renewables and fossil fuels and nuclear all at the same time, adding 400 gigawatts to their electrical grid annually while the US adds virtually none.
But instead of fully rising to this abundance challenge, the administration is adding new regulatory barriers to renewable deployment — exactly the opposite of what we need. Interior Secretary approval for every wind and solar project on federal land. Paused offshore wind leasing. Higher tariffs on solar panels. This isn’t what energy abundance looks like.
More concerningly, China’s pragmatic approach to energy abundance is precisely what enables their fast-follower strategy in AI. Every year China can power thousands more data centers while the US struggles with energy constraints. The US must also rise to this challenge. Making American AI more expensive to power than Chinese competitors because of culture war politics is strategic malpractice.
10. Success depends on unprecedented government execution speed
The Plan is ambitious. But everything depends on execution speed that the federal government has rarely achieved before.
Consider what needs to happen in the next 6-12 months:
CAISI needs funding, authorization, and staff to handle its 16+ different mandates
DoD must create technical standards that don't yet exist for the high-security data centers that don't yet exist
Multiple agencies need to coordinate on export control enforcement and we don't even know what the overall strategy should be or how we should manage the trade-offs
NEPA reforms must actually accelerate permitting, not just rearrange paperwork from one law to another
Many different federal agencies need to develop significant amounts of AI evaluation capabilities
Workforce retraining programs need to be designed and deployed before mass layoffs hit
International allies need to be convinced to adopt our export control regime
Each item on this list faces its own obstacles. DOD procurement moves at a glacial pace even for simple items, let alone novel AI infrastructure with unique reliability challenges. Inter-agency coordination typically takes years, not months. And Congressional action, when needed, is also not known for its speed. By the time agencies finish their comment periods, working groups, and pilot programs, the entire AI landscape may have shifted.
The Plan recognizes that “details will determine outcomes.” But it doesn’t grapple enough with this fundamental tempo mismatch. Every month of delay is equivalent to years in previous technology transitions. To accomplish everything in time, we may need a wartime level of mobilization, without a clear and present threat to scare people into action.
I’m cautiously optimistic that the Trump administration can rise to the challenge here — after all, Trump’s first term saw Operation Warp Speed, one of the fastest government initiatives in memory, delivered when it really mattered. But as the Plan’s authors know, success requires not just doing the right things, but doing them faster than the government has ever moved before. To say that will be hard is an understatement.
The bottom line
Overall, this Plan is a good start. I’m happy that the Trump administration is taking AI security and reliability seriously, promising essential steps to secure AI systems, to address national security concerns, and to build government and private-sector capacity for responsible AI adoption. The AI Action Plan represents serious engagement with AI's transformative potential. It correctly identifies many key challenges: infrastructure needs, security requirements, workforce transitions, and export control enforcement. The administration deserves credit for moving beyond rhetorical flourishes to specific policy actions.
But the Plan also embodies deep contradictions that could prove fatal. It aims for AI dominance while ignoring the challenges and opportunities of AGI and superintelligence. It calls for energy abundance while abandoning a proper ‘all of the above’ energy strategy. It promises worker protection without really specifying how that would work. It talks about keeping exports out of China while still selling the H20 chip and making minimal plans to address widespread smuggling. Most critically, it requires execution speed the federal government rarely achieves, in a domain where months matter more than years. All of these contradictions need serious attention if we want to succeed.
In the end, winning the AI race isn’t about having the best plan. It’s about implementation, adaptation, and speed. The clock is ticking, and it’s moving faster than most people realize.