Z.ai and Huawei aren't defeating US export controls
Loosening restrictions now would surrender America's technological advantage
About the author: Peter Wildeford is a top forecaster, ranked top 1% every year since 2022.
There's nothing like an otherwise very normal and expected Chinese AI model release to make people lose their minds, misunderstand the technical landscape, and thus spin misguided narratives. First it was DeepSeek. Next it was Manus. Today, it seems to be Z.ai's GLM-4.5.
Specifically, a recent Wall Street Journal op-ed by Aaron Ginn, “China's Z.ai and America's Self-Defeating AI Strategy”, argues that Z.ai's GLM-4.5 is proof US-led controls have backfired spectacularly by enabling rather than constraining Chinese innovation. This article is important, as it was retweeted by David Sacks, Trump's AI and Crypto adviser. This confusion is dangerous because following Ginn’s advice would undo the very thing that is keeping the US ahead in the AI race.
Contra Ginn, Z.ai and Huawei are actually proof that the US export controls are working, and the path to ensure US AI dominance is to tighten export controls further. A careful examination of the evidence — from benchmark data to industry reports to admissions from Chinese executives themselves — reveals that Ginn has it completely backwards. Chinese AI continues advancing but faces fundamental constraints. Export controls impose real and growing costs. And manufacturing gaps are widening, not narrowing. Let’s dig in.
Deflating the hype around GLM-4.5
As we've seen with DeepSeek and Manus, Chinese AI developments follow a predictable hype cycle that distorts serious analysis of technological competition. GLM-4.5 is no different.
The way this hype cycle works is that bold initial claims about revolutionary Chinese breakthroughs generate strong headlines, before being debunked by technical reality checks that unfortunately receive far less attention. For example, Ginn writes in his article that “China's DeepSeek shocked the global AI community in January” — but DeepSeek was emphatically not a shock to the global AI community; it had been widely tracked for months ahead of time. Ginn also writes that DeepSeek built “a frontier model at a fraction of Western costs”, but this needs clarification: DeepSeek achieved results in line with the Pareto frontier of model cost-efficiency at the time, and has since lost ground to leading US models.
Now we're doing the hype round again with Z.ai. Doubling down on incorrect claims about DeepSeek, Ginn moves on to make incorrect claims about Z.ai, claiming that DeepSeek “has been outdone by a Chinese company subject to US sanctions”, referring to Z.ai's GLM-4.5 model launched a week ago.
Z.ai is a Chinese AI company formerly known as ‘Zhipu AI’ before changing their name last month to reflect a more international focus (no relation to Musk's xAI). The “US sanctions” here refers to the fact that Z.ai is on the US Entity List, a list of companies prohibited from importing US tech, because the US accused Z.ai of assisting the Chinese military.
So, is GLM-4.5 good? Ginn boasts that GLM-4.5 “matches or exceeds Western standards in coding, reasoning and tool use”, but GLM-4.5's own published benchmark scores show it worse than DeepSeek, Anthropic, Google DeepMind, and xAI models on nearly all the benchmarks listed. And this is the best possible light for GLM-4.5 — because GLM-4.5 is still so new, there are currently no independent third-party benchmark scores, so we don't know whether Z.ai is inflating its scores or cherry-picking only its best results. For example, DeepSeek's benchmark scores were lower when independently assessed.
Regardless, Z.ai itself admitting that GLM-4.5 is generally worse than DeepSeek’s latest model means that we can upper bound GLM-4.5’s performance with DeepSeek’s. And when measured against multiple independent benchmarks, DeepSeek is found to be 4-10 months behind the US state of the art. Thus GLM-4.5 is not currently a match for US models.
You might then point to GLM-4.5’s impressive model size and cost. Yes, it is impressive that GLM-4.5 is a small model that can fit on eight H20s, as Ginn points out. But OpenAI's recently launched 'Open Models' also out-benchmark GLM-4.5 despite running on even smaller hardware, such as a single 'high-end' laptop. And Google's Gemini 2.5 Flash has a similar API cost and similar performance as GLM-4.5 despite coming out several months earlier. This also ignores the fact that GLM-4.5 handles only text, while major US models can also handle images, audio, and video.
Similarly, I'm unsure why Ginn is so impressed that Z.ai “projects it will have millions of downloads and millions of dollars of revenue in 2025” when OpenAI is hitting nearly 700 million weekly active users and projects $20B in annual revenue, or roughly 20,000x as much as Z.ai. Compared to a Chinese population of 1.4 billion and a world population of over 8 billion, we're very far from widespread international adoption, let alone the full-on “dependency” on Chinese AI models that Ginn describes. Like DeepSeek, GLM-4.5 is impressive but not an imminent threat to Western dominance.
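As a quick sanity check on that ratio (the $20B figure is OpenAI's projection quoted above; Z.ai's “millions of dollars” is treated here as roughly $1M, an order-of-magnitude assumption rather than a reported number):

```python
# Back-of-the-envelope check of the revenue gap described above.
# Z.ai's "millions of dollars" projection is assumed to be ~$1M;
# this is an illustrative order of magnitude, not a reported figure.
openai_revenue = 20e9   # OpenAI's projected annual revenue, USD
zai_revenue = 1e6       # assumed order of magnitude for Z.ai, USD

ratio = openai_revenue / zai_revenue
print(f"OpenAI's projected revenue is roughly {ratio:,.0f}x Z.ai's")
```

Even if Z.ai's actual projection were ten times larger, the gap would still be three to four orders of magnitude.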
A reality check on Chinese chips
GLM-4.5 isn't the only thing that Ginn is hyping — he's also very excited about China's domestic manufacturing of computer chips, mainly from the companies Huawei (equivalent to US's Nvidia) and SMIC (equivalent to Taiwan's TSMC). To hear Ginn tell it, US export controls have pushed China to develop their own chips and this has gone so well that China will be exporting these chips globally and rivaling the West. Per Ginn, “Huawei's GPUs are quickly filling the gap left by the Biden administration's adoption of stricter export controls.”
But this gets the facts about Huawei and SMIC critically wrong. Huawei isn’t filling any gap at all. Perhaps the most striking contradiction to Ginn's narrative comes from Huawei itself. In a recent interview with People's Daily, Ren Zhengfei, Huawei's founder, explicitly stated that the US “overestimates” his company's chip capabilities and that Huawei's Ascend AI chips “lag the US by a generation.”
Tests confirm this, showing Huawei's latest Ascend 910C chips suffer from overheating, crashes, and buggy software. They lack support for modern AI features like FP8 precision computing, which enables more efficient training. Huawei's workarounds involve bundling multiple inferior chips to achieve higher performance, but this brings increased power consumption, heat generation (leading to overheating), and system complexity. There is also the matter of CUDA, the software stack that makes Nvidia chips so much easier to integrate into AI workloads, which is significantly better than Huawei's software, CANN.
When it comes to manufacturing the chips themselves, there are even more issues. While Taiwan's TSMC achieves 90%+ yields on mature 7nm processes, China's SMIC reportedly struggles with yields closer to 50% on their '7nm' process (which itself performs closer to TSMC's 10nm). This means for every 100 chips they attempt to manufacture, only about 50 are functional — a massive inefficiency that makes Huawei chips substantially less economically viable.
This is because China lacks access to extreme ultraviolet (EUV) lithography — the technology required to produce the most advanced semiconductors. These machines, costing approximately $200M each, represent perhaps the most complex technology ever created, requiring the combined scientific output of the Netherlands, Germany, Japan, and the US.
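To make the yield economics concrete, here is a rough sketch. The yield figures are the ones cited above; the per-wafer cost and dies per wafer are illustrative assumptions, not reported numbers:

```python
# Rough cost-per-functional-chip comparison at different yields.
# The 90% and 50% yields are the figures cited in the text; wafer
# cost and dies per wafer are illustrative assumptions only.
WAFER_COST = 15_000    # assumed cost per wafer, USD
DIES_PER_WAFER = 60    # assumed candidate dies per wafer

def cost_per_good_die(yield_rate: float) -> float:
    """Cost of each functional chip after discarding defective dies."""
    return WAFER_COST / (DIES_PER_WAFER * yield_rate)

tsmc = cost_per_good_die(0.90)   # ~90% yield on mature 7nm
smic = cost_per_good_die(0.50)   # ~50% reported yield on '7nm'

print(f"TSMC: ${tsmc:,.0f} per functional die")
print(f"SMIC: ${smic:,.0f} per functional die")
print(f"SMIC pays ~{smic / tsmc:.1f}x more per functional chip")
```

Whatever the true wafer cost, the ratio depends only on the yields: at 50% versus 90%, every functional SMIC chip costs 1.8x as much to produce, before even accounting for the performance gap.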
Ginn reports that “China's foundry capacity has vastly surpassed Washington's expectation, and China is shipping chips abroad several years ahead of schedule”. Ginn offers no source for this claim, a surprising omission for such a significant assertion. It’s also false — the US government's own assessment from last month is that Huawei can only manufacture 200,000 chips this year, a number insufficient to fulfill even Chinese market demand, let alone the global market. It’s also far below the millions of chips TSMC and Nvidia produce annually.
Similarly, Ginn alleges that “Beijing is increasing international dependency on its models and hardware.” While this certainly has been true in the past for fiber optics and 5G tech, there is no evidence of any international sales of Huawei AI chips outside of China. There was one planned sale in Malaysia that was reportedly blocked by the US government; beyond that single aborted transaction, there are no documented sales. Ginn's claims of international dependency on Chinese AI hardware are not supported by any available evidence.
In fact, Huawei hasn't even managed to create dependency within China: Nvidia still holds a majority of the market, and major Chinese tech companies have yet to make substantial purchases of Huawei chips, instead preferring Nvidia's offerings. The US Commerce Department's May warning that using Huawei's advanced AI chips without authorization could violate export regulations has further chilled demand, particularly from companies with international operations.
At the end of the day, China was always going to innovate. China was already committed to chip self-sufficiency years before the US export controls began. The CCP government continues to use its vast buying power to promote Huawei chips even when it doesn’t make economic sense to do so. Export controls are not needed to encourage Chinese indigenization, as the CCP can continue to encourage it by fiat.
Huawei chips will improve over time, and SMIC’s manufacturing capabilities will surely advance. There will eventually be many good Chinese-made AI chips. But the current reality is that Chinese hardware remains substantially behind and loosening US-led export controls would only help China reach US parity faster.
The Chinese AI ecosystem runs on Western tech
This brings us to the central contention of Ginn’s analysis: that “Washington's tack so far has been to try to limit Chinese entities’ access to advanced hardware” and that GLM-4.5 and Huawei are proof that this has failed. But there is no evidence that GLM-4.5 runs on Huawei chips; Ginn instead mentions GLM-4.5 running on Nvidia H20s (which, notably, Z.ai should not legally have, given its place on the Entity List). DeepSeek was also trained on Nvidia chips. Chinese AI today remains dependent on American hardware, just less capable versions of it, and this is a good thing.
According to data from Epoch AI, when we look at notable Chinese AI models released between 2017 and 2024, over 90% of language models were trained on Western hardware. The first model reportedly trained entirely on Chinese hardware was not released until January 2024, after several years of training models on Western hardware.

This gets even more damning for China when you zoom out. According to recent RAND analysis, the US controls 77% of global AI compute capacity while China has only 12%. And according to a recent IAPS and CNAS analysis, even that Chinese compute depends heavily on Western technology.
The most damaging breach came in September 2024 when TSMC, lacking basic due diligence, illegally produced 3 million advanced chip dies for Huawei through a Chinese proxy company. This single violation provided China with compute equivalent to approximately 1 million Nvidia H100s. These TSMC-produced dies are what actually power Huawei's ‘domestic’ Ascend 910B and upcoming 910C chips.
And prior export control implementation suffered from a pattern of failures that need fixing. The 2022 controls contained specification errors that allowed Nvidia to create the A800 and H800 chips that enabled DeepSeek. Chinese firms stockpiled years' worth of high-bandwidth memory after upcoming restrictions leaked to industry.
Combine that with billions of dollars in smuggled Nvidia chips and billions of dollars in legal imports of the Nvidia H20, and the vast majority of China’s AI ambitions are built on Western technology. To hear Ginn tell it, China has no access to American tech and built everything itself, but in actuality ~85% of Chinese training compute and ~95% of Chinese inference compute comes from the West:

But these implementation failures don't negate the strategic value — they show the need for better enforcement and how much more export controls could achieve. Even with these leaks, export controls have substantially slowed China's progress and maintained America’s lead. As Liang Wenfeng, CEO and founder of DeepSeek, said in an interview: “The problem we are facing has never been funding, but the export control on advanced chips.”
Ginn is right that DeepSeek and Z.ai were forced to be efficient by their lack of access to the best US chips, but that doesn't mean they wouldn’t prefer more advanced hardware, and their models would likely be much better if they were granted such access. Letting China buy as many advanced GPUs as it wants obviously closes the US-China AI gap more quickly.
This also matches the CCP government’s own behavior. While the fast follower strategy has served China well in other industries, it becomes more challenging without reliable access to leading technology. If it were indeed the case that the US export controls have accidentally supercharged the Chinese AI and semiconductor industries, then China would be putting tariffs or bans on US chips. They don’t. Instead, China takes every chance it can to complain about export controls and negotiates to ease them — a clear signal that the controls are working as intended.
We also must keep in mind that the US export controls are playing a longer game. We're barely one new chip generation into the current export controls, and they will bite harder as chip technology continues to outpace China. The state of the art in US chips has already shifted to the B200, and Chinese companies won’t have any B200s except via smuggling. This will allow American models to be even more powerful at the same level of capital in a way that Chinese companies won't be able to match. By the time we get to the next generation of Nvidia chips after the B200, the gap between China and the US should be even larger.
The evidence suggests export controls are achieving their intended effect: forcing inefficiency while maintaining technological advantage, in a way that compounds over time. While Chinese companies work with inferior domestic chips or smuggled alternatives, US companies deploy massive compute clusters using next-generation hardware at large scale. OpenAI's Stargate project envisions $100 billion in annual compute spending using chips China cannot legally or practically access at a capital scale that Chinese tech companies are not matching.
And every Z.ai engineer working on making their AI work with inferior chips is one not pushing the frontier of AI capabilities. Every dollar spent on smuggling premiums or inefficient power consumption is one not invested in research. The uncertainty also undermines long-term planning — companies cannot build multi-year research programs around smuggled chips or inferior domestic alternatives.
How to win with US AI abroad
One thing Ginn is right about is that President Trump and his AI Action Plan are right to call for “scaling supply and adoption abroad” and “exporting American AI and hardware while cutting regulations that slow production at home”. Additionally, Ginn is correct to say that “Every U.S.-branded LLM shapes AI norms globally. Success comes from ubiquity of platforms, not exclusion or restrictions.”
But global exports should emphatically not mean handing US technology directly to our adversaries. As Mark Beall, the former Director of Strategy and Policy at the DoD Joint Artificial Intelligence Center during Trump’s first term said, “the fact that the Chinese military can freely buy, steal, download, and weaponize American technology represents a dereliction of duty that would have been unthinkable during the Cold War.”
Ginn wants us to believe that Z.ai's GLM-4.5 proves export controls have backfired: that by restricting China's access to advanced chips, we've somehow made them stronger. This is precisely backwards, and the actual facts tell a different story.
The real danger isn't that export controls have failed. It's that we might abandon them just as they're starting to compound. Following Ginn's advice would be like lifting sanctions on the Soviet Union in 1985 because they built a decent tractor.
Trump's instinct to export American AI globally is right. But that means to allies and partners, not adversaries. Tighten enforcement, crack down on smuggling, make sure TSMC can't ‘accidentally’ produce millions of chips for Huawei again.
The evidence is clear. Export controls are working. Now is the time to double down, not give up.


Thanks a lot for this much needed no-nonsense reality check!
This is very well laid out. I enjoy reading things that confirm my priors, especially with good sourcing.
Thinking one can produce millions of domestic GPUs that will train the next frontier model because they produced enough to power a few CloudMatrix384s is like thinking China can feed all their people because they can grow some rice on a few hectares. Scale and efficiency matter. Yield per hectare and total hectares are the equivalent of GPUs per wafer (and the extra loss that may come in during packaging) and total wafer throughput. The next AI frontier model needs more rice; so much more rice.
Keep the export controls to starve their AI. Controls gum up the motor of progress by taking any talent they have and focusing it on workaround strategies which keep them in the follower paradigm longer. Yes, they will innovate but so will we.