AI Hardware Infrastructure Boom: Humanoid Robot Revenue Surges 2,203%, Silicon Photonics Drives Next-Gen Compute Overhaul

Full-Stack Explosion of the AI Hardware Infrastructure Chain: From 2,203% Revenue Growth in Humanoid Robots to Strategic Investments in Silicon Photonics—Accelerating the Foundational Restructuring of Compute
The global AI race has quietly moved beyond the “superstructure” phase—dominated by algorithms and models—and entered a foundational restructuring wave, using the physical world as its canvas and hard tech as its brush. Q1 2025 data reveals a clear, powerful trend: the AI hardware infrastructure chain is undergoing a full-stack, cross-cycle, high-certainty collective boom. This is not an isolated breakthrough by individual companies, but rather a structural bull market driven by synchronized growth across more than ten specialized subsectors—including optical transceivers, advanced packaging, high-speed connectors, liquid-cooling systems, intelligent power supplies, edge AI chips, and humanoid robot platforms.
Humanoid Robots: Leaping from Concept Validation to Commercial Cash Cows
UBTECH’s latest financial report delivers a landmark signal: revenue from its humanoid robot solutions surged 2,203.7% year-on-year to RMB 821 million (Source 7), surpassing traditional education and service robotics for the first time to become the company’s largest revenue stream. This figure far exceeded market expectations—and reflects a substantive commercial breakthrough:
- A Shenzhen-based new-energy vehicle manufacturer has deployed 200 Walker X units for battery-pack quality inspection and flexible assembly;
- A Hefei-based semiconductor packaging & testing facility has introduced 50 industrial-grade robots to handle wafer transport in ultra-clean environments;
- Most critically, its “Robot-as-a-Service” (RaaS) model is rapidly scaling across the Yangtze River Delta manufacturing cluster, with average monthly service fees per unit reaching RMB 120,000 and a customer renewal rate of 91%.
This marks the definitive transition of humanoid robots from lab demonstrations to B2B, large-scale revenue generation. Upstream suppliers are reaping parallel benefits: makers of servo motors and high-precision gear reducers, along with real-time operating system (RTOS) vendors, have already booked orders through Q3 2026.
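The RaaS economics cited above can be sketched with simple arithmetic. The monthly fee and renewal rate come from the report; the fleet size, the assumption that churned customers are not replaced, and the multi-year horizon are illustrative assumptions:

```python
# Illustrative RaaS unit economics. Fee and renewal rate are from the
# report; fleet size and horizon are hypothetical assumptions.

MONTHLY_FEE_RMB = 120_000   # average service fee per robot per month
RENEWAL_RATE = 0.91         # annual customer renewal rate
FLEET = 200                 # hypothetical deployed fleet size

def raas_revenue(years: int) -> float:
    """Cumulative revenue in RMB, shrinking the fleet by churn each year."""
    total, active = 0.0, float(FLEET)
    for _ in range(years):
        total += active * MONTHLY_FEE_RMB * 12
        active *= RENEWAL_RATE  # churned customers assumed not replaced
    return total

print(f"Year-1 revenue: RMB {raas_revenue(1) / 1e6:.0f}m")
print(f"3-year cumulative: RMB {raas_revenue(3) / 1e6:.0f}m")
```

Even with churn compounding against the fleet, the 91% renewal rate keeps year-three revenue above 80% of year one, which is what makes the recurring-revenue model attractive relative to one-off hardware sales.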
Silicon Photonics: Driving the “Optics In, Copper Out” Shift in AI Data Centers
As compute demand grows exponentially, traditional copper interconnects are hitting hard physical limits in bandwidth, power consumption, and latency. NVIDIA’s US$2 billion strategic investment in Marvell’s silicon photonics technology (Source 8) represents a direct assault on this fundamental constraint. Silicon photonics chips integrate optical signal modulation, transmission, and detection onto CMOS-compatible silicon platforms—enabling per-channel data rates of 1.6 Tbps, 40% lower power consumption, and nanosecond-scale latency. Marvell has already delivered its first batch of 800G silicon photonic transceivers for NVIDIA’s GB200 NVL72 servers, with stable yields at 78%. Domestic players—including Zhongke Xintong and Accelink—are also accelerating deployment: their 800G silicon photonic modules have passed stress tests on Huawei’s Ascend 910B AI clusters. According to LightCounting, the global silicon photonics market is projected to exceed US$5.2 billion by 2027, growing at a compound annual growth rate (CAGR) of 34.6%, establishing itself as the next core compute foundation—after GPUs.
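The LightCounting projection above implies a base-year market size that can be back-computed from the 2027 target and the stated CAGR. The three-year (2024 to 2027) horizon is an assumption; LightCounting's actual baseline year may differ:

```python
# Back out the implied base-year market size from the cited 2027 target
# (US$5.2bn) and 34.6% CAGR. The 2024 base year is an assumption.

TARGET_2027_BN = 5.2
CAGR = 0.346
YEARS = 3  # assumed 2024 -> 2027 horizon

implied_base = TARGET_2027_BN / (1 + CAGR) ** YEARS
print(f"Implied base-year market: US${implied_base:.2f}bn")

# Sanity check: growing the base at the CAGR recovers the target.
assert abs(implied_base * (1 + CAGR) ** YEARS - TARGET_2027_BN) < 1e-9
```

Under that assumption the market would roughly double and then some over three years, consistent with the article's framing of silicon photonics as an early-stage, high-growth layer of the compute stack.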
Advanced Packaging & High-Speed Interconnects: The Physical Glue of the Chiplet Era
Compute advancement is no longer solely reliant on transistor scaling, but increasingly on heterogeneous integration via chiplets. This shift has elevated advanced packaging (e.g., CoWoS, InFO) and high-frequency, high-speed connectors to center stage. ASE and JCET report sustained 100% utilization of their CoWoS-L capacity; Luxshare-ICT’s mass-produced PCIe 6.0 board-to-board connectors achieve insertion loss below 1.2 dB and have been adopted by multiple AI server OEMs. Notably, domestic substitution is progressing from “functional” to “high-performance”: JCET announced that its XDFOI™ 2.5D packaging technology has achieved a yield exceeding 92%, with costs 18% lower than TSMC’s equivalent offering. This advancement directly enables stable operation of Huawei’s Ascend 910B chips in thousand-GPU clusters—and explains why Huawei’s 2025 revenue reached RMB 880.9 billion (Source 10), with R&D expenditure accounting for 21.8%: its investment focus has pivoted from generic chip design toward foundational engineering capabilities—including co-optimization of packaging, signal integrity simulation, and multi-physics coupling (thermal–electrical–optical).
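The 1.2 dB insertion-loss figure quoted for the PCIe 6.0 connectors can be translated into a fraction of surviving signal power with the standard decibel formula (this conversion is general signal-integrity arithmetic, not a figure from Luxshare-ICT):

```python
# Convert a connector insertion loss in dB to the fraction of signal
# power that survives the transition: fraction = 10^(-dB/10).

def surviving_power_fraction(loss_db: float) -> float:
    """Fraction of signal power remaining after `loss_db` dB of loss."""
    return 10 ** (-loss_db / 10)

# A 1.2 dB insertion loss leaves roughly 76% of the signal power.
print(f"{surviving_power_fraction(1.2):.3f}")
```

Keeping this loss low matters because PCIe 6.0's PAM4 signaling has a much tighter eye margin than earlier NRZ generations, so every fraction of a dB in the connector budget is margin the channel equalizer does not have to recover.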
Green Power × Compute: Intelligent Power Supplies & Liquid Cooling Build a Sustainable Compute Foundation
Exponential compute growth poses severe energy challenges. Sungrow Power reported net profit of RMB 13.46 billion in 2025 (Source 17); its growth trajectory is now deeply intertwined with AI infrastructure:
- Its integrated “source-grid-load-storage” solution was deployed at the Zhongwei Smart Computing Center in Ningxia, enabling photovoltaic direct-drive plus storage peak-shaving to reduce PUE to 1.08;
- Its proprietary liquid-cooled inverters—jointly deployed with Huawei FusionPower—at a ten-thousand-GPU cluster in Dongguan achieved 45 kW per rack and tripled thermal efficiency.
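The PUE of 1.08 cited above can be put in perspective with a quick comparison against a conventional air-cooled facility. The 10 MW IT load and the 1.5 reference PUE are illustrative assumptions, not figures from the Zhongwei or Dongguan deployments:

```python
# What a PUE of 1.08 means for a hypothetical 10 MW IT load, compared
# with an assumed conventional air-cooled facility at PUE 1.5.

IT_LOAD_MW = 10.0  # hypothetical IT load

def facility_power(pue: float) -> float:
    """Total facility power (MW): IT load times PUE."""
    return IT_LOAD_MW * pue

# Overhead = everything that is not IT load (cooling, power-chain loss).
overhead_liquid = facility_power(1.08) - IT_LOAD_MW
overhead_air = facility_power(1.50) - IT_LOAD_MW

print(f"Liquid-cooled overhead: {overhead_liquid:.1f} MW")
print(f"Air-cooled overhead:    {overhead_air:.1f} MW")
print(f"Savings:                {overhead_air - overhead_liquid:.1f} MW")
```

At these assumed loads, dropping from PUE 1.5 to 1.08 cuts non-compute overhead from 5 MW to under 1 MW, which is why power and cooling vendors appear alongside chip vendors in this value chain.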
Meanwhile, liquid-cooling specialists—including Gosen and Envicool—saw order volumes rise 210% year-on-year, while optical module leaders—such as Innolight and Eoptolink—launched integrated “optical module + micro-liquid-cooling kit” solutions. This signals the emergence of a closed-loop value chain—“green electricity → intelligent power distribution → efficient thermal management → high-density computing”—transforming compute from an energy black hole into a flexible load and value amplifier within next-generation power systems.
The Underlying Logic of the Full-Stack Bull Market & Cross-Market Synchronicity
The sustainability of this hardware bull market rests on three structural drivers:
- Physical Law Constraints (slowing Moore’s Law forcing architectural innovation);
- Geopolitical Restructuring (advanced-node restrictions accelerating deep domestic substitution);
- Commercial Model Evolution (new paradigms like RaaS and compute leasing generating stable cash flows).
Cross-market linkages are especially pronounced:
- When A-share optical transceiver and liquid-cooling stocks rally, Hong Kong–listed AI server contract manufacturers concurrently strengthen;
- On the day Marvell’s U.S.-listed stock hit an all-time high, A-share silicon photonics material suppliers surged to daily trading limits;
- Demingli’s earnings preview for Q1 2026—projecting net profit of RMB 3.15–3.65 billion (Source 1)—is powered by volume ramp-up of its LPDDR5X memory modules customized for AI servers. Here, memory chips, interconnect devices, thermal systems, and power management converge into an inseparable value network.
When UBTECH’s robots stride confidently down production lines, when Marvell’s silicon photonic chips glow inside server racks, and when Sungrow’s inverters convert sunlight into streams of compute current—we witness more than rising financial figures. We witness a quiet yet monumental infrastructure revolution. It makes no headlines on LLM parameter leaderboards—but it lays down an irreplaceable physical foundation for AI civilization, measured in micrometer-level packaging precision, picosecond-scale optical latency, and kilowatt-level liquid-cooling efficacy. This hard-tech marathon—born in the transistor, matured in the photon, and stabilized by green power—is only now entering its most challenging, decisive deep-water phase.