China-US AI Compute Race Enters New Phase: Quantum Leap, Domestic ASICs, and Energy-Optimized Data Centers

TubeX Research
5/14/2026, 3:01:05 AM

U.S.-China AI Compute Competition Enters a New Phase: Three Foundational Breakthroughs Reshape the Global Technological Power Structure

The global AI race has quietly moved beyond the surface-level boom of applications and descended into the “deep waters” of compute, a domain that will determine technological sovereignty for the next decade. Since Q3 2024, China has achieved strategic breakthroughs across three critical foundational dimensions at once: quantum computing, application-specific integrated circuits (ASICs), and the energy infrastructure underpinning intelligent computing centers. The “Jiuzhang-4” photonic quantum computing prototype demonstrated a quantum advantage exceeding 1,000×; Tencent announced it will significantly increase AI capital expenditures in the second half of 2024, driven by mass production of domestically developed ASICs; and CATL, through a related-party strategic investment, acquired a stake in CenturyLink (now known as “CenturyLink IDC”), the first time a battery manufacturer has engaged this deeply in the energy architecture of data centers. These are not isolated technological leaps but a systemic, full-stack upgrade of foundational capabilities. They signal that the U.S.-China AI compute competition has formally entered a new phase defined by autonomous controllability, physical-layer synergy, and standard-setting authority.

“Jiuzhang-4”: Crossing the Threshold from Quantum Supremacy to Practical Utility

The “Jiuzhang-4” photonic quantum computing prototype, developed by Professor Pan Jianwei’s team at the University of Science and Technology of China (USTC), solves Gaussian boson sampling problems involving 512 photons roughly 1,000× faster than today’s fastest supercomputers. This result not only breaks the team’s own world record but, more importantly, brings sampling scale within reach of the practical thresholds required for specific cryptographic analyses and quantum chemistry simulations.[8] Unlike earlier demonstrations of quantum advantage focused purely on principle validation, Jiuzhang-4 shows markedly greater engineering maturity: integrated photonic circuit stability reaches 99.97%, single-photon source efficiency exceeds 85%, and power consumption of its cryogenic control unit has dropped by 40%. In other words, quantum computing is transitioning from laboratory “muscle-flexing” to an industrial-grade “toolkit” capable of solving real-world problems. The latest assessment report of the U.S. National Quantum Initiative (NQI) concedes that China has opened a “substantive lead window” on the photonic quantum computing pathway. The goal of this lead is not to replace classical computing but to pioneer a new computational paradigm: as large-model training hits the wall of Moore’s Law, quantum-classical hybrid architectures will become the pivotal lever for breaking through the ceiling on AI model complexity.
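To see why the single-photon source efficiency cited above matters so much at large photon counts, here is a minimal sketch. It assumes, purely for illustration, that photon losses are independent, so the probability that all n photons of an event survive is the per-photon efficiency raised to the n-th power (real boson-sampling experiments tolerate partial loss, so this is a deliberately simplified model):

```python
def n_photon_success_prob(eta: float, n: int) -> float:
    """Probability that all n photons survive, assuming independent
    per-photon transmission/detection efficiency eta."""
    return eta ** n

# With the article's 85% per-photon efficiency, even modest photon
# numbers rarely all arrive together:
p20 = n_photon_success_prob(0.85, 20)    # roughly 4%
p512 = n_photon_success_prob(0.85, 512)  # vanishingly small

print(f"20 photons: {p20:.4f}")
print(f"512 photons: {p512:.3e}")
```

The steep exponential decay is exactly why pushing source efficiency from, say, 70% toward 85% and beyond is the engineering battleground for large-scale photonic sampling.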

Tencent Doubles Down on Domestic ASICs: Shifting Capex Toward “Autonomous Compute Density”

In its Q2 2024 earnings report, Tencent explicitly signaled that AI-related capital expenditures would “increase significantly” in the second half of the year, driven primarily by the mass production and deployment of its in-house ASIC series, “Zixiao.”[15] Unlike general-purpose GPUs, “Zixiao” is purpose-built for large-model inference. Fabricated on an advanced 7nm process, it achieves 3.2× higher energy efficiency than NVIDIA’s A100 GPU on ResNet-50 inference tasks. More importantly, its architecture is fully based on the open-source RISC-V instruction set, bypassing the licensing risks associated with ARM, and it integrates compute-in-memory units via chiplet-based heterogeneous packaging. Tencent Cloud has announced plans to deploy over 500,000 “Zixiao” chips in 2024 alone, powering thousand-card-scale clusters for its Hunyuan large language model. This decision carries profound industrial implications: capex priorities are shifting from purchasing compute to building compute density. Once access to compute no longer depends on a single international supplier, Chinese internet giants gain unprecedented control over cost structures and technical iteration timelines. According to TrendForce, China’s AI ASIC market will account for 35% of the global total by 2025, with over 60% consumed internally by domestic internet firms. This trend is already compelling industry leaders, including TSMC and JCET, to accelerate adaptation to China’s indigenous IP ecosystem.
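A back-of-envelope sketch shows what a 3.2× efficiency edge means at fleet scale. The 3.2× ratio and the 500,000-chip figure come from the article; the assumed ~300 W average A100 inference draw, and the simplification that one ASIC replaces one GPU at equal throughput, are our illustrative assumptions, not sourced facts:

```python
# Assumptions (not from the article): average A100 board power under
# inference load, and a 1:1 chip-for-chip replacement at equal throughput.
A100_AVG_WATTS = 300.0
EFFICIENCY_RATIO = 3.2      # perf/W advantage claimed for the ASIC
FLEET_SIZE = 500_000

# At equal throughput, each ASIC needs 1/3.2 of the GPU's power.
asic_watts_equiv = A100_AVG_WATTS / EFFICIENCY_RATIO
fleet_savings_mw = FLEET_SIZE * (A100_AVG_WATTS - asic_watts_equiv) / 1e6

print(f"Per-chip equivalent draw: {asic_watts_equiv:.2f} W")
print(f"Fleet-wide savings: {fleet_savings_mw:.1f} MW")
```

Under these assumptions the fleet saves on the order of 100 MW of continuous draw, which is roughly the IT load of a large data-center campus, and is one concrete reason “compute density” rather than raw chip count becomes the capex metric.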

CATL’s Strategic Entry into CenturyLink: Rewriting the Physical Laws of Intelligent Computing Centers

CATL, through its wholly owned subsidiary Ruiting Times, joined forces with a state-backed fund to acquire a 19.2% equity stake in CenturyLink (a leading third-party IDC operator in China) for RMB 2.86 billion.[18][19] On the surface this looks like a financial investment; in reality it has quietly launched an “energy-compute coupling revolution.” CenturyLink operates China’s largest third-party IDC cluster, consuming over 4 billion kWh annually; CATL commands the world’s most mature lithium iron phosphate (LFP) energy storage systems and liquid-cooling thermal management technologies. Their joint “Zero-Carbon Intelligent Computing Center” pilot, launched at CenturyLink’s Langfang data center, deploys a 200 MWh energy storage system to arbitrage peak/off-peak electricity pricing while channeling server waste heat into the battery thermal management loop, extending battery cycle life by 22%. Even more consequential is the standard-setting impact: the traditional IDC metric, PUE (Power Usage Effectiveness), cannot capture the value of this kind of energy reuse. CATL is now collaborating with China’s Ministry of Industry and Information Technology (MIIT) to draft an Evaluation Standard for Energy Efficiency Utilization (EEU) of Intelligent Computing Centers, which will make battery thermal management performance, waste-heat recovery rates, and direct green-power supply ratios core evaluation criteria. This initiative is expected to catalyze rapid growth across niche sectors, including liquid-cooled servers, immersion cooling systems, and solid-state battery energy storage, with related markets projected to exceed RMB 100 billion by 2027.
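The two levers described above can be sketched numerically. The 200 MWh battery size is from the article; the tariff spread, round-trip efficiency, and the EEU formula itself are illustrative assumptions (the MIIT standard is still being drafted, so the real metric may differ):

```python
# --- Lever 1: peak/off-peak arbitrage on the storage system ---
STORAGE_MWH = 200.0
ROUND_TRIP_EFF = 0.90    # assumed LFP round-trip efficiency
PEAK_PRICE = 1.10        # assumed peak tariff, RMB/kWh
OFFPEAK_PRICE = 0.35     # assumed off-peak tariff, RMB/kWh

# One full charge (off-peak) / discharge (peak) cycle per day:
daily_arbitrage_rmb = STORAGE_MWH * 1000 * (PEAK_PRICE * ROUND_TRIP_EFF - OFFPEAK_PRICE)

# --- Lever 2: a hypothetical EEU-style metric crediting waste-heat reuse ---
def eeu(it_energy: float, facility_energy: float, reused_heat: float) -> float:
    """Hypothetical EEU: like 1/PUE, but reused waste heat counts as
    useful output alongside IT energy."""
    return (it_energy + reused_heat) / facility_energy

# A PUE-1.3 facility that reuses 10% of its IT energy as battery-loop heat
# scores higher than plain 1/PUE would suggest:
print(f"Daily arbitrage: {daily_arbitrage_rmb:,.0f} RMB")
print(f"1/PUE: {eeu(100.0, 130.0, 0.0):.3f}  EEU: {eeu(100.0, 130.0, 10.0):.3f}")
```

The point of the sketch: PUE can only approach 1.0 from above, whereas a reuse-aware metric rewards facilities for turning waste heat into a second useful output, which is exactly the value the article says PUE fails to capture.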

Industrial Catalytic Effects of Full-Stack Autonomy and Global Order Restructuring

These three breakthroughs form a tightly coupled, self-reinforcing cycle: quantum computing provides future algorithmic acceleration; ASICs deliver today’s foundational compute; and new-energy IDCs ensure sustainable, scalable compute delivery. This vertical integration across algorithms, chips, and energy is dismantling the U.S.-dominated legacy paradigm built on “GPUs + electrical grids + air-cooled server rooms.” Its spillover effects are already visible across the supply chain: Advanced Micro-Fabrication Equipment (AMEC) has seen a surge in orders replacing imported etch tools; Shenghe Semiconductor’s chiplet advanced-packaging capacity utilization has held at 100% for three consecutive quarters; and Sugon’s liquid-cooled server shipments surged 170% quarter-on-quarter in Q2. Most critically, China is exporting these new standards globally via digital infrastructure projects under the Belt and Road Initiative: Saudi Arabia’s NEOM smart-city computing center, for instance, has officially adopted CATL’s energy storage solution and Cambricon’s “Siyuan” ASIC chips. This marks a decisive evolution: technology-standard export has shifted from selling products to defining infrastructure.

The global AI supply chain’s balance of power is tilting. When compute is no longer merely a matter of stacking silicon but a precise orchestration of quantum states, domain-specific architectures, and physical energy dynamics, the competition transcends individual technical metrics and rises to the level of systems engineering capability. China’s triple breakthrough sends an unambiguous signal: the ultimate moat of the AI era lies not in application-layer traffic monopolies but in the programmability and synergistic coordination of the physical world itself. This silent “deep-water compute war” has only just reached its decisive inflection point.

