NVLink Fusion Boosts Low-Latency Computing for Third-Party CPUs

Key Takeaways

1. Introduction of NVLink Fusion: At Computex 2025, Nvidia announced NVLink Fusion, a new chip-level interface that opens its NVLink interconnect to third-party CPUs and custom accelerators.

2. Chiplet Technology Transition: NVLink Fusion moves the connection from a board-to-board link to a compact chiplet, delivering up to 14 times the bandwidth of standard PCIe while preserving memory-semantic access between devices.

3. Collaborations and Partnerships: MediaTek, Qualcomm, Fujitsu, and other partners are integrating NVLink Fusion into their processors, ASICs, and design services.

4. Modular Solutions for Hyperscale Operators: Hyperscale cloud operators can create large GPU clusters with NVLink Fusion-enabled components, allowing for efficient scaling without performance drops typical of PCIe setups.

5. Positioning Nvidia as a Key Link: By licensing its technology, Nvidia aims to be a central player in the AI hardware ecosystem, allowing competitors to use its bandwidth while still fitting into its software and networking environment.


At Computex 2025, Nvidia unveiled NVLink Fusion, a new chip-level interface that extends its proprietary NVLink interconnect beyond its own processors. The silicon lets third-party CPUs and custom accelerators tap the same high-bandwidth, low-latency fabric that currently links Nvidia GPUs in large-scale “AI factories.”

Transition to Chiplet Technology

NVLink Fusion shifts the connection from a board-to-board link to a compact chiplet that designers can place alongside their own compute dies. Although it continues to use familiar PCIe signaling, it delivers up to 14 times the bandwidth of a standard PCIe lane while maintaining memory-semantic access between devices. The fabric complements Nvidia’s existing Spectrum-X Ethernet and Quantum-X InfiniBand products, which handle scale-out traffic across server racks.
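As a rough sanity check of the "up to 14 times" figure, the publicly quoted spec numbers line up: a PCIe 5.0 x16 link carries roughly 128 GB/s bidirectionally, while fifth-generation NVLink is rated at 1.8 TB/s per GPU. The sketch below is a back-of-envelope comparison using those public figures, not numbers from the announcement itself:

```python
# Back-of-envelope check of the "up to 14x PCIe" bandwidth claim.
# Assumed public spec figures (not from the NVLink Fusion announcement):
#   - PCIe 5.0 x16: ~64 GB/s per direction -> ~128 GB/s bidirectional
#   - Fifth-generation NVLink: 1.8 TB/s bidirectional per GPU

pcie5_x16_bidir_gb_s = 128     # GB/s, PCIe 5.0 x16, both directions
nvlink5_bidir_gb_s = 1_800     # GB/s, NVLink 5 per-GPU aggregate

ratio = nvlink5_bidir_gb_s / pcie5_x16_bidir_gb_s
print(f"NVLink 5 vs PCIe 5.0 x16: ~{ratio:.1f}x")
```

Dividing the two aggregates gives a ratio of about 14, consistent with Nvidia's marketing figure.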

Collaborations and Partnerships

Multiple partners have already committed to the technology. MediaTek, Marvell, Alchip, Astera Labs, Cadence, and Synopsys will provide custom ASICs, IP blocks, or design services built on the new protocol. On the CPU front, Fujitsu plans to pair its upcoming 2 nm, 144-core Monaka processor with NVLink Fusion, while Qualcomm intends to connect the interface to its Arm-based server CPU. Both companies are targeting integration into Nvidia’s rack-scale reference systems without sacrificing direct GPU access.

Modular Solutions for Hyperscale Operators

Hyperscale cloud operators can now pair NVLink Fusion-enabled components with Nvidia’s own Grace CPUs and Blackwell-class GPUs to build large GPU clusters linked by 800 Gb/s networking. This modular approach allows clusters of thousands or even millions of accelerators without the performance drops typical of PCIe-only setups.

By licensing a key part of its technology stack, Nvidia is positioning itself as the essential link for diverse AI hardware rather than just a closed-box supplier. Competitors who found it difficult to match NVLink’s impressive bandwidth can now leverage it, but they’ll need to do so within Nvidia’s extensive software and networking ecosystem.
