SAN JOSE, Calif. — Broadcom Inc. on Tuesday unveiled a new networking chip designed to help companies build massive artificial intelligence computing systems, intensifying its competition with Nvidia Corp. as the race to dominate AI infrastructure accelerates.
The new chip, called the Thor Ultra, enables cloud operators to connect hundreds of thousands of data processing units into one large, cohesive computing system, a capability critical for running the complex models that power AI tools such as ChatGPT and Gemini.
“In a distributed computing system, the network plays an extremely important role in building large clusters,” said Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group. “I’m not surprised that anyone in the GPU business wants to make sure they are participating in networking.”
Broadcom’s latest move underscores its push to expand in the high-performance networking segment as demand for AI-driven infrastructure surges.
The company has long supplied chips that move data inside large-scale data centers, but the Thor Ultra represents a leap toward the ultra-dense, low-latency interconnects needed for AI workloads.
The launch comes amid a growing battle with Nvidia, whose own networking arm, bolstered by its 2020 acquisition of Mellanox Technologies, has become a crucial part of its AI ecosystem. Nvidia’s InfiniBand networking technology currently dominates many AI clusters, giving it a strategic edge over competitors.
Broadcom, known for its Ethernet-based solutions, is betting that open standards and scalability will help it capture a larger share of the AI networking market.
The Thor Ultra, according to Broadcom executives, was designed with both flexibility and power efficiency in mind, qualities that could appeal to hyperscale data center operators such as Google, Amazon, and Microsoft.
Industry analysts view Broadcom’s latest chip as a significant step forward, but one that faces formidable competition.
“Broadcom is clearly signaling it intends to challenge Nvidia’s dominance not just in GPUs but across the AI stack,” said Patrick Moorhead, chief analyst at Moor Insights & Strategy.
“Networking is the lifeline of AI systems, and any latency or bottleneck can dramatically reduce performance. Broadcom is addressing exactly that pain point.”
Moorhead added that Ethernet, traditionally seen as less performant than Nvidia’s InfiniBand, has made substantial gains in recent years. “Thor Ultra may help Ethernet close the gap for AI workloads, especially if paired with custom software optimizations,” he said.
Linley Gwennap, principal analyst at The Linley Group, said Broadcom’s long-standing relationship with Google gives it a strategic advantage.
“Broadcom has deep design ties with Google’s Tensor Processing Units, and that partnership has already generated billions in revenue. The Thor Ultra may be a natural extension of that collaboration.”
Broadcom’s networking chips currently power an estimated 40% of global cloud data center networks, according to IDC. Nvidia’s Mellanox technology, by comparison, controls roughly 60% of the AI-specific networking market.
The Thor Ultra reportedly offers up to twice the bandwidth of its predecessor, supporting 1.6 terabits per second per port and low-latency switching for dense AI clusters.
Broadcom said the chip’s architecture allows the creation of “exascale” networks, linking hundreds of thousands of GPUs or AI accelerators under one unified fabric.
During a September tour of Broadcom’s San Jose network chip testing labs, company engineers demonstrated the infrastructure used to design and validate the Thor Ultra, including custom cooling systems and real-time simulation tools to test traffic loads.
“Each generation of these chips has to move faster and handle more connections with greater precision,” said Anjali Rao, a Broadcom design engineer who helped lead the testing phase. “Our goal is to make scaling AI infrastructure as seamless as scaling software.”
At Silicon Valley’s AI Hardware Forum on Tuesday, the announcement drew strong interest from system architects and network engineers who see growing bottlenecks in today’s AI clusters.
“AI models are growing exponentially, but networking hasn’t always kept up,” said Kevin Lu, a systems engineer at a Bay Area startup developing large-scale vision models.
“If Broadcom can make Ethernet-based solutions as efficient as InfiniBand, it would open up a lot more flexibility for companies like ours.”
Others noted that Nvidia’s end-to-end ecosystem remains difficult to rival. “Nvidia’s advantage lies in integration: their GPUs, software, and networking all work seamlessly,” said Sandra Kim, a data center consultant. “Broadcom will need strong partnerships to compete at that level.”
Broadcom is expected to begin mass production of the Thor Ultra in early 2026, with initial deployments targeted for hyperscale cloud data centers. Analysts predict that if successful, the chip could double Broadcom’s AI networking revenue within three years.
Velaga indicated that the company is already working with several major cloud operators for pilot installations. “Our customers want scalability, performance, and openness,” he said. “Thor Ultra was designed with those needs in mind.”
The launch also comes at a time when geopolitical tensions and chip export restrictions have complicated global semiconductor supply chains.
Broadcom executives said the company’s diversified manufacturing base, including facilities in Taiwan and Singapore, will help mitigate risks.
As AI workloads continue to surge, the battle between Broadcom and Nvidia is expanding beyond GPUs into the intricate web of networking technology that binds massive AI systems together.
With the Thor Ultra, Broadcom is positioning itself not only as a supplier of high-speed connectivity but as a key architect of the infrastructure powering the AI era.
Whether that challenge will meaningfully erode Nvidia’s dominance remains to be seen, but analysts agree the competition will likely drive faster innovation and could reshape the economics of large-scale AI computing.