This week a sitting member of the US Congress, Rep. Jill Tokuda (D-HI), asked what might be the most important question ever raised in a Congressional hearing:
Stop and think about what a US Representative just asked.
The question wasn’t whether China might attack us with AI. It wasn’t about the need to race with China. It was about whether anyone’s AI might become powerful enough that the nation loses control of it and the AI becomes an independent global actor.
The hearing was entitled “Algorithms and Authoritarians: Why U.S. AI Must Lead”. But what was supposed to be a routine House hearing about US-China competition became the most AGI-serious Congressional discussion in history.
The committee was realizing in real time that when we talk about AI, we’re not talking merely about better chatbots. We're talking about AI systems that might learn to improve themselves, become superintelligent, and then independently challenge the global order.
This is where the AGI conversation is now going in some parts of Congress — not whether AGI is coming, but whether anyone will be in control after it arrives. How did this come about?
The Papal Storm
The vibe shift started in May 2025, when establishment figures began saying the quiet part out loud — not tech CEOs or AI researchers this time, but the Pope.
On May 10th, Pope Leo XIV described AI as “another industrial revolution,” warning about “new challenges for the defense of human dignity, justice and labor.” He labeled AI “the main challenge facing humanity.”
Ten days later, European Commission President Ursula von der Leyen dropped a bombshell:
When the current budget was negotiated, we thought AI would only approach human reasoning around 2050. Now we expect this to happen already next year.
Think about that. The EU’s prior multi-year budget likely wasn't premised on AGI. But it is now EU policy to expect human-level AI reasoning within a year and to make budget decisions accordingly.
The US political spectrum started having its own freakout. On May 21st, Vice President JD Vance mentioned in an interview that he'd read the “AI 2027” scenario (a detailed forecast by AI researchers predicting AGI by 2027 through rapid recursive self-improvement and automated AI research) and that he takes the risk of AI “getting out of control” seriously.
By June 5th, you had Rep. Marjorie Taylor Greene (R-GA) tweeting opposition to AI legislation because she didn't want “the development of Skynet and the rise of the machines,” while Sen. Bernie Sanders (I-VT) was calling AI a “BIG DEAL” that could “wipe out HALF of entry-level white collar jobs.”
When Bernie and MTG are both worried about AI — for completely different reasons — something has shifted.
Mainstream Democrats piled on. Senator Chris Murphy (D-CT) called AI more disruptive than “the printing press or advanced medicine or the internet.”
Pete Buttigieg warned that AI would transform society “in less time than it takes an American student to complete high school.”
But all of this was just prologue to a hearing this week that showed how far the discourse has shifted.
A hearing that shifted the discourse
The core premise of the hearing, accepted by everyone present, was that the US is in a high-stakes ‘new Cold War’ with China over AI dominance, and that losing the ‘AI race’ would have catastrophic consequences for national security and the global order.
What made the discussion even more interesting to me was its explicit focus on risks from Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). Existential risk from AGI/ASI was no longer a fringe idea: it was discussed directly and taken seriously by both witnesses and members of Congress.
Ranking Member Raja Krishnamoorthi (D-IL) opened by literally playing a clip from The Matrix, warning about a “rogue AI army that has broken loose from human control.”
Not The Matrix as a loose metaphor, but a ‘machine uprising’ spoken of as a literal thing that could happen and that Congress should take seriously.
And this wasn’t some fringe member — this was the Ranking Member of the committee taking an AGI uprising seriously in an official hearing.
The witnesses didn't calm things down. Mark Beall, former Pentagon AI Policy Director, laid out the stakes:
Nobel laureates in physics and Turing Award winners in computer science are sounding the call that there could be potential catastrophic issues with very advanced AI systems that human beings may lose control of. And when the architects of these systems are purchasing remote bunkers and talking about ‘summoning the demon,’ we might be wise to start to pay a little bit of attention.
He wasn’t exaggerating for effect. He was describing literal behavior by the people building these systems.
I encourage everyone to watch the hearing in full to get a sense of how much the Congressional debate over AGI has transformed.
Sleeper agents, AI blackmail, and unemployable humans
The committee members had clearly done their homework, and what they’d learned terrified them.
Rep. Neal Dunn (R-FL) asked about an Anthropic paper in which Claude “attempted to blackmail the chief engineer” in a test scenario, and about another paper on AI “sleeper agents” that could act normally for months before activating. When Jack Clark, a witness and Head of Policy at Anthropic, tried to reassure him by noting that safety testing might mitigate the risks, Dunn’s response was perfect: “I'm not sure I feel a lot better, but thank you for your answer.”
Rep. Nathaniel Moran (R-TX) got to the heart of what makes modern AI different:
Instead of a programmer writing each rule a system will follow, the system itself effectively writes the rules [...] AI systems will soon have the capability to conduct their own research and development.
Moran then identified automated AI R&D as “a critical red line that hearkens back to the days of when we established ourselves as a superpower.”
Beall confirmed this, calling AI development “alchemy” rather than science and pointing to the fact that AI companies are creating systems that even specialists cannot explain. Unlike traditional software, where we write explicit rules, modern AI learns its own patterns from data.
As these systems become more capable, we face a growing gap between what they can do and what we can predict or control about their behavior. Once AI can improve AI faster than humans can, we can't put that genie back in the bottle. AI-designed AI systems will be even less interpretable than current models. AI doesn't need sleep, can run thousands of experiments in parallel, and iterates at silicon speed. Eventually, a first-mover AGI advantage could become overwhelming.
As a result, we lose the ability to control the pace and direction of AI development. When AI takes over these tasks, it could theoretically trigger an intelligence explosion, in which each generation of AI creates a more capable successor, potentially compressing decades of progress into mere months. When AGI or ASI starts taking agentic actions far faster than we can comprehend, for reasons we cannot understand, it will be hard to meaningfully control the future.
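To make that compression intuition concrete, here is a minimal toy model (my own sketch, not anything presented at the hearing): suppose each AI generation speeds up the R&D needed to build its successor by a constant factor. The `first_gen_years` and `speedup` values below are arbitrary assumptions; the point is that total calendar time becomes a geometric series that converges to a finite bound.

```python
# Toy model of recursive AI R&D speedup (illustrative assumptions only):
# generation i takes first_gen_years / speedup**i years to build, because
# each generation accelerates the research that produces its successor.

def years_until_generation(n: int, first_gen_years: float = 4.0,
                           speedup: float = 2.0) -> float:
    """Total calendar years elapsed before generation n exists."""
    return sum(first_gen_years / speedup**i for i in range(n))

for n in (1, 5, 10, 50):
    print(f"generation {n:2d} arrives after {years_until_generation(n):6.3f} years")

# The total approaches first_gen_years * speedup / (speedup - 1) = 8 years:
# however many generations follow, they all fit inside a fixed window.
# That geometric pile-up is the "decades compressed into months" dynamic.
```

Real systems obviously won't follow a clean geometric law, but the qualitative point survives milder assumptions: once each generation meaningfully shortens the next one's development time, most of the remaining progress happens in a brief final sprint.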
Where is this all going?
Another wild moment came when Rep. Ro Khanna (D-CA) asked about jobs. Beall’s response was chilling:
If you look at the stated goals of many of these companies, they want to have AI replace all humans at all jobs. It's what they say publicly [...] I worry about a future in which human beings are not just unemployed, but they're unemployable. And this breaks the notion of the free market in very important ways [...] When I hear folks in industry claim things about universal basic income and this sort of digital utopia, I study history, and I worry that that sort of leads to one place, and that place is the Gulag.
A former Pentagon official told Congress that Silicon Valley’s vision leads to Soviet labor camps. Even wilder: nobody disagreed. No pushback. No “that's hyperbolic.” Just uncomfortable silence from the tech witnesses and knowing nods from the committee members.
This moment crystallized the chasm between Silicon Valley and Washington. Beall's logic was brutal but clear: when humans have no economic value, they become purely a cost to whoever controls the resources. And historically, regimes haven't been kind to people they view as pure cost centers. AI regimes may be no different.
The fact that Jack Clark from Anthropic didn’t even attempt to counter this narrative is telling. Either he agreed with the concern, or he recognized that in that room, at that moment, defending mass technological unemployment was politically radioactive. The Overton window on AI’s societal impact has shifted.
Overall, this hearing showed a bipartisan group of Congresspeople grappling with specific, technical scenarios of AGI ‘loss of control’ in a way I’ve never seen before.
Vibes are great, but details matter
In five months, AGI went from unspeakable to unavoidable in Washington. The vibe shift is complete. But vibes aren't policy, and the gap between recognizing the problem and acting on it remains vast.
The hearing revealed that we face three interlocking challenges:
Commercial competition: The traditional great power race with China for economic and military advantage through AI
Existential safety: The risk that any nation developing superintelligence could lose control — what Beall calls a race of “humanity against time”
Social disruption: Mass technological unemployment as AI makes humans “not just unemployed, but unemployable”
These aren't contradictory worldviews — they're simultaneous realities we must navigate. And in preparing for them, we need to be clear-eyed about what’s going on.
First, while I agree with everyone at the hearing that China poses an important risk to America and American values, there is no concrete evidence that China is “racing to AGI” with any intensity. Chinese AI investments focus on surveillance and industrial automation, not recursive self-improvement. Their AI initiatives involve a few thousand GPUs, not the 100,000+ GPU monsters you see in the US. And DeepSeek is not a Chinese government program.
As Dr. Thomas Mahnken (CEO of the Center for Strategic and Budgetary Assessments, a Cold War historian, and a witness on the panel) noted, China is an innovator but first and foremost a “fast follower.” As the House CCP Committee's DeepSeek report showed, DeepSeek was built significantly upon US technology, with US progress fueling Chinese progress.
Second, Dr. Mahnken characterized China as never discussing safety, but he is wrong about this: China does discuss safety. The CCP doesn't want AI loss of control — it has worried about loss of control in general for decades. In fact, China established its own equivalent of an AI Safety Institute in February 2025, and Andrew Yao — China's only Turing Award winner — once warned that uncontrolled AI means “we are going to be eliminated.”
Where do we go from here?
So we urgently need more policymaking on the many issues pertaining to AGI. What do we do?
Borrowing from Beall’s opening statement, I also suggest “the three P's: Protect, Promote, and Prepare.”
Protect: Stop the hemorrhaging of American AI capabilities to China. The fact that ~100,000 advanced AI chips were likely smuggled to China last year represents what Beall calls “a dereliction of duty that would have been unthinkable during the Cold War.” We need better export control enforcement.
Promote: Deploy American AI globally before allies are forced to choose unfriendly alternatives. Maintain deterrence by boosting the US military through AI integration. Ensure the civilian parts of the government can keep up with the pace of change. Solve our energy bottlenecks and domestic manufacturing issues to ensure the US can remain competitive.
Prepare: Establish classified evaluations to better understand critical national security risks from AGI. Outside of government, support a robust ecosystem of third-party auditing. Require transparency from model developers so we know what we are dealing with, and establish stronger whistleblower protections so we can learn when things go wrong. Start preparing stronger defenses for potential AI threats.
This isn't about slowing down or speeding up. It's about channeling competition away from mutual destruction while maintaining American leadership.
The urgency is real, but it requires sophistication, not panic. We need policies that enhance American competitiveness while building international norms against uncontrolled AI development. We need to prepare for technological unemployment while ensuring the benefits flow to American workers first. We need to prevent China from stealing our capabilities while potentially cooperating on existential risks.
Congress is paying attention now. But the real test isn't whether Congress can quote The Matrix in hearings. It's whether they can craft policies that thread the needle between preventing catastrophe and preserving human agency.
Whether that attention produces the nuanced response this moment demands — well, I'm with Rep. Dunn. I'm not sure I feel a lot better.
> This isn't about slowing down or speeding up.
The plan sounds a lot like keeping capability development on its current course, which, left as is, is naturally speeding up: solving energy bottlenecks allows higher AI throughput; pushing AI into military and civilian infrastructure automates more, faster; and spreading AI solutions abroad makes every society dependent on AI. At least at a cursory glance, that doesn't feel very different from what accelerationism advocates.