AI titans clash over visions for America's future
OpenAI, Anthropic, Google offer competing AI priorities
America's AI giants are locked in a policy battle that will shape the nation's technological future — and their competing visions have just been formally submitted to the Trump Administration's Office of Science and Technology Policy.
The Trump Administration is still crafting its AI Action Plan and has requested public comment. The Action Plan is due by July 22, and the government is soliciting feedback from companies, think tanks, and civil society organizations about what it should contain. The plan will “define the priority policy actions needed to sustain and enhance America's AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.”
The US’s pre-eminent AI titans OpenAI, Anthropic, and Google have each submitted their recommendations. Normally this would be a very boring affair: government document submissions from AI policy teams. But these rarely scrutinized policy documents offer an unusually transparent window into how leading AI developers view both the technology's trajectory and the proper role of government in responding to AI developments.
Make no mistake: while these companies all claim to prioritize American technological leadership, their proposals reflect dramatically different worldviews about where AI is headed and what government should do about it. And these differences are driven as much by the companies' business interests as by their genuine assessments of what's best for the country.
Let’s dive into what is going on, what this reveals about the plans and postures of the different tech companies, and what this suggests for the future of America.
The Competing Visions: Security, Freedom, or Pragmatism?
Perhaps the most consequential disagreement involves how quickly AI capabilities are advancing and how the government should respond.
Anthropic’s security
Anthropic projects that powerful AI systems “could emerge as soon as late 2026 or 2027” with capabilities “matching or exceeding that of Nobel Prize winners across most disciplines”, equivalent to “a country of geniuses in a datacenter”. Such developments, if realized, would represent a profound transformation of technological capabilities.
Given these projected capabilities, Anthropic is very explicit about severe national security risks, emphasizing the urgency and seriousness of AI's potential to aid malicious activities. Anthropic focuses in particular on AI providing critical assistance with biological weapons development, a capability it expects to be present in its next model. Anthropic advocates for government infrastructure and capacity to assess and manage these risks proactively, including potential mandatory testing for high-risk capabilities. Anthropic’s framework positions security as the paramount value, arguing implicitly that freedom of innovation must be balanced against potentially catastrophic risks.
While this approach would impose a greater regulatory burden, Anthropic backs it with substantial evidence. Their submission uniquely admits that their latest Claude model shows “concerning improvements in capacity to support aspects of biological weapons development”, a level of transparency about security vulnerabilities not matched by their competitors. This acknowledgment of specific risks provides a concrete basis for government action rather than theoretical concerns.
Anthropic's focus on security and systemic risk is not only a technical assessment but also a bid to align with US national security interests and potentially become a partner of choice for the US government. Anthropic’s willingness to detail specific concerns about biological weapons and cyberattack capabilities signals a company ready to collaborate with government on a security-first approach.
To their credit, Anthropic's more detailed technical assessment shows a level of transparency and candor that its competitors lack. Anthropic takes far more ownership of the dual-use nature of its innovations and doesn’t present them as all rosy upside. Its analysis is also grounded in specific technical benchmarks that acknowledge concrete security concerns rather than vague generalities.
~
OpenAI’s freedom
OpenAI uses present-tense revolutionary language, suggesting we're already “at the doorstep of the next leap in prosperity: the Intelligence Age.” OpenAI frames AI primarily through the lens of “freedom” and democratic values, repeatedly positioning American AI development against “CCP-built autocratic, authoritarian AI.” This cleverly frames any potential regulation as not just a technical decision but an ideological one.
In contrast to Anthropic, OpenAI proposes a “tightly-scoped framework for a voluntary partnership between the federal government and private sector”, in which industry involvement with government is entirely voluntary and incentivized through lucrative government contracts rather than rules. It's smart corporate positioning: appearing collaborative while structuring that voluntary cooperation to maximize their own advantages.
OpenAI has seen the same risks as Anthropic, acknowledging in its Deep Research system card that “several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats”. However, unlike Anthropic, OpenAI sidesteps explicit discussion of classical national security risks such as AI-enabled biological weapons and cyberweapons, focusing instead on geopolitical competition with the Chinese Communist Party.
OpenAI is much more directly confrontational about China's intentions and capabilities than Anthropic or Google. OpenAI discusses threats like IP theft, coercion through infrastructure initiatives, and regulatory arbitrage as mechanisms China could exploit to overtake US leadership in AI. While these risks have a genuine basis, it is notable that OpenAI emphasizes the threats that align with its own business interests while downplaying the risks whose mitigation might constrain its profitability.
~
Google’s pragmatism
Google, meanwhile, takes a more pragmatic and conservative approach. Google talks about how “the potential of artificial intelligence is nearly unlimited, and we’re already seeing how it can revolutionize healthcare, accelerate scientific discovery, and transform our economy for the better”. This is grandiose, but ironically a fairly conservative assessment compared to how Anthropic and OpenAI talk about transformative AI capabilities.
Google’s approach reflects a traditional corporate conservatism that sees government primarily as an infrastructure enabler rather than either a security partner or values enforcer. Their submission implicitly argues for a more hands-off government approach that would preserve Google’s market advantages.
Google acknowledges AI’s potential misuse by bad actors but stresses that disproportionate, risk-averse regulations could hinder innovation and competitiveness. Google is less alarmist than Anthropic, though more risk-aware than OpenAI.
Instead, Google advocates for “focused, sector-specific, and risk-based AI governance” rather than what it considers to be sweeping regulations, cautioning against overly broad restrictions on foundational model development. Additionally, both Google and OpenAI call for federal pre-emption of state laws, preventing individual US states from creating their own AI regulations. While this would prevent “a patchwork of states” from creating inconsistent and burdensome standards across the country, it is important that pre-empting state authority does not simply leave a vacuum in which these risks go unaddressed.
Navigating the differences
AI companies have a lot of responsibility — they are potentially building technologies that will reshape the economy, labor markets, national security, and even the balance of geopolitical power.
Tech company CEOs themselves admit this is too much for any one person to bear. When Anthropic CEO Dario Amodei and Google DeepMind CEO Demis Hassabis were asked if they worried about turning into a ‘Robert Oppenheimer’ figure, both said they believe no single person should carry that responsibility. But who should carry it? This is generally where the government steps in.
Anthropic is clearly interested in the government playing such a role. Judging from their actual responses, however, Google and OpenAI don't seem to want government to play a large role here, except to enable them to make more profit, build out more infrastructure, and win lucrative government contracts.
Note that this is a very different posture from the one Google and OpenAI are often accused of taking. They are not advocating for regulation that only they can afford to comply with, the classic move that stifles “little tech”; instead, they are advocating for minimal regulation.
~
Instead, we should recognize that the key visions of security, freedom, and pragmatism that Anthropic, OpenAI, and Google offer are not actually in tension. We can have a good balance of all three.
This starts with taking the claims of the major AI companies seriously and assessing them on their merits. These differences over AI capabilities and timelines aren't mere technical disagreements — they fundamentally determine whether government should act with urgency or with deliberation.
If Anthropic is right about transformative capabilities arriving within 1-2 years, waiting patiently to craft the right policy approach could be disastrous. However, if Google's more measured assessment is correct, premature intervention risks stifling innovation through unnecessary constraints.
Unfortunately, the federal government must make this judgment without the luxury of hindsight. Anthropic at least deserves credit for providing concrete projections that can be evaluated, while OpenAI's revolutionary rhetoric and Google's cautious generalities offer policymakers much less to work with.
The most reasonable approach likely involves acknowledging legitimate elements from each vision while recognizing their limitations. OpenAI's emphasis on maintaining American leadership provides important competitive context and cannot be ignored. Even Google's caution about overregulation offers practical wisdom about the true costs of premature intervention. Yet Anthropic's security concerns deserve serious consideration, especially when backed by specific evidence of capabilities and risks; they will ultimately require some form of government action, and soon. And it’s not like Anthropic is advocating for anything heavy-handed; its proposals only look heavy-handed next to a suggestion of essentially no rules at all.
Properly balancing these visions will be key to ensuring we achieve the amazing benefits that AI is capable of without falling prey to the critical risks. I wish the Trump administration the best of luck in developing their AI Action Plan.