Introduction to AI Data Centers
Meta and Oracle are upgrading their AI data centers with NVIDIA’s Spectrum-X Ethernet networking switches. This technology is designed to handle the growing demands of large-scale AI systems. Both companies are adopting Spectrum-X as part of an open networking framework to improve AI training efficiency and accelerate deployment across massive compute clusters.
The Role of Spectrum-X in AI Data Centers
Jensen Huang, NVIDIA’s founder and CEO, said that trillion-parameter models are transforming data centers into "giga-scale AI factories." Spectrum-X acts as the "nervous system" connecting millions of GPUs to train the largest models ever built. Oracle plans to use Spectrum-X Ethernet with its Vera Rubin architecture to build large-scale AI factories. Mahesh Thiagarajan, Oracle Cloud Infrastructure’s executive vice president, said that the new setup will allow the company to connect millions of GPUs more efficiently, helping customers train and deploy new AI models faster.
Building Flexible AI Systems
According to Joe DeLaere, who leads NVIDIA’s Accelerated Computing Solution Portfolio for Data Center, flexibility is key as data centers grow more complex. NVIDIA’s MGX system offers a modular, building-block design that lets partners combine different CPUs, GPUs, storage, and networking components as needed. The system promotes interoperability, allowing organizations to use the same design across multiple generations of hardware.
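The building-block idea behind MGX can be illustrated schematically: a rack is a composition of interchangeable modules, and one module class can be swapped without redesigning the rest. The sketch below is purely illustrative; the component names are hypothetical placeholders, not NVIDIA part numbers or APIs.

```python
from dataclasses import dataclass

# Schematic illustration of a modular, building-block rack design.
# All component names are hypothetical, not NVIDIA specifications.

@dataclass(frozen=True)
class Module:
    kind: str    # e.g. "cpu", "gpu", "storage", "networking"
    model: str

@dataclass
class Rack:
    modules: list

    def swap(self, kind, new_model):
        """Replace one module class without touching the rest of the design."""
        self.modules = [Module(kind, new_model) if m.kind == kind else m
                        for m in self.modules]

rack = Rack([Module("cpu", "cpu-gen1"), Module("gpu", "gpu-gen1"),
             Module("networking", "spectrum-x")])
rack.swap("gpu", "gpu-gen2")   # reuse the same rack design across generations
print([m.model for m in rack.modules])
```

The point of the pattern is the interoperability claim above: upgrading one generation of GPU leaves the CPU, storage, and networking choices intact.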
Scaling Up, Out, and Across
NVIDIA’s MGX system is central to how data centers scale. Gilad Shainer, the company’s senior vice president of networking, said that MGX racks host both compute and switching components, supporting NVLink for scale-up connectivity and Spectrum-X Ethernet for scale-out growth. The same MGX design also lets multiple AI data centers be linked as a unified system, enabling high-speed connections across regions.
Expanding the AI Ecosystem
NVIDIA sees Spectrum-X as a way to make AI infrastructure more efficient and accessible across different scales. Shainer said that the Ethernet platform was designed specifically for AI workloads like training and inference, offering up to 95 percent effective bandwidth and outperforming traditional Ethernet by a wide margin. NVIDIA’s partnerships with companies such as Cisco, xAI, Meta, and Oracle Cloud Infrastructure are helping to bring Spectrum-X to a broader range of environments.
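The "effective bandwidth" figure can be made concrete with simple arithmetic. The numbers below are illustrative assumptions (an 800 Gbps port and a 60 percent figure for congested standard Ethernet), not NVIDIA measurements; only the 95 percent figure comes from the article.

```python
# Illustrative effective-bandwidth arithmetic; link rate and the standard-
# Ethernet fraction are assumptions for the sake of the example.
link_rate_gbps = 800              # hypothetical raw line rate of one port
spectrum_x_fraction = 0.95        # effective bandwidth cited for Spectrum-X
standard_fraction = 0.60          # assumed fraction for congested standard Ethernet

spectrum_x_goodput = link_rate_gbps * spectrum_x_fraction
standard_goodput = link_rate_gbps * standard_fraction

print(f"Spectrum-X goodput:        {spectrum_x_goodput:.0f} Gbps")
print(f"Standard Ethernet goodput: {standard_goodput:.0f} Gbps")
```

At cluster scale the gap compounds: every percentage point of effective bandwidth lost is GPU time spent waiting on the network rather than computing.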
Preparing for Vera Rubin and Beyond
DeLaere said that NVIDIA’s upcoming Vera Rubin architecture is expected to be commercially available in the second half of 2026, with the Rubin CPX product arriving by year’s end. Both will work alongside Spectrum-X networking and MGX systems to support the next generation of AI factories. Spectrum-X and Spectrum-XGS, its scale-across counterpart, share the same core hardware but use different algorithms tuned to different distances, minimizing latency and allowing multiple sites to operate together as a single large AI supercomputer.
Collaborating Across the Power Chain
To support the 800-volt DC transition, NVIDIA is working with partners from chip level to grid. The company is collaborating with Onsemi and Infineon on power components, with Delta, Flex, and Lite-On at the rack level, and with Schneider Electric and Siemens on data center designs. A technical white paper detailing this approach will be released at the OCP Summit.
Performance Advantages for Hyperscalers
Spectrum-X Ethernet was built specifically for distributed computing and AI workloads. Shainer said it offers adaptive routing and telemetry-based congestion control to eliminate network hotspots and deliver stable performance. These features enable higher training and inference speeds while allowing multiple workloads to run simultaneously without interference.
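Why adaptive routing removes hotspots can be shown with a toy simulation: static ECMP-style hashing pins each flow to a pseudo-random path, so a few large "elephant" flows can pile onto one link, while picking the currently least-loaded path keeps every link near the average. This is a simplified stand-in for telemetry-driven path selection, not Spectrum-X's actual algorithm.

```python
import random

def route_flows(adaptive, num_paths=8, seed=0):
    """Assign a fixed flow mix to paths; return the hottest path's load."""
    sizes = [1, 1, 1, 10] * 250          # many small flows plus periodic large ones
    rng = random.Random(seed)
    loads = [0] * num_paths
    for size in sizes:
        if adaptive:
            # Adaptive routing: send the flow down the least-loaded path,
            # a stand-in for telemetry-based congestion awareness.
            path = loads.index(min(loads))
        else:
            # Static ECMP-style hashing: a fixed pseudo-random path per flow.
            path = rng.randrange(num_paths)
        loads[path] += size
    return max(loads)

static_hotspot = route_flows(adaptive=False)
adaptive_hotspot = route_flows(adaptive=True)
print("static hotspot load:  ", static_hotspot)
print("adaptive hotspot load:", adaptive_hotspot)
```

With 3,250 total units over 8 paths (406.25 average), the adaptive strategy's worst path is provably within one flow size of the average, while hashed placement typically lands well above it.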
Hardware and Software Working Together
While NVIDIA’s focus is often on hardware, DeLaere said that software optimization is equally important. The company continues to improve performance through co-design, aligning hardware and software development to maximize efficiency for AI systems. NVIDIA is investing in FP4 kernels, frameworks such as Dynamo and TensorRT-LLM, and algorithms like speculative decoding to improve throughput and AI model performance.
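Of the algorithms mentioned, speculative decoding is easy to sketch: a cheap draft model proposes several tokens, and the expensive target model verifies them in one pass, accepting the agreed prefix for free. The toy below uses trivial deterministic rules as stand-ins for both models; it illustrates the greedy accept/verify loop, not NVIDIA's implementation.

```python
# Toy sketch of speculative decoding. Both "models" are hypothetical
# deterministic rules, purely illustrative.

def draft_propose(prefix, k):
    # Cheap draft model: guesses the sequence keeps incrementing.
    last, out = prefix[-1], []
    for _ in range(k):
        last += 1
        out.append(last)
    return out

def target_next(prefix):
    # Expensive target model: increments, but wraps to 0 after 5.
    last = prefix[-1]
    return 0 if last >= 5 else last + 1

def speculative_step(prefix, k=4):
    """Accept draft tokens while the target agrees, then take one target token."""
    seq = list(prefix)
    for tok in draft_propose(prefix, k):
        if target_next(seq) == tok:
            seq.append(tok)       # verified draft token, accepted for free
        else:
            break                 # first disagreement ends the accepted run
    seq.append(target_next(seq))  # target always contributes one token
    return seq

print(speculative_step([1, 2, 3]))
```

When the draft model is usually right, several tokens are emitted per expensive verification pass, which is where the throughput gain comes from.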
Networking for the Trillion-Parameter Era
The Spectrum-X platform is NVIDIA’s first Ethernet system purpose-built for AI workloads. It’s designed to link millions of GPUs efficiently while maintaining predictable performance across AI data centers. With congestion-control technology achieving up to 95 percent data throughput, Spectrum-X marks a major leap over standard Ethernet.
Conclusion
NVIDIA’s Spectrum-X Ethernet networking switches are changing the way AI data centers operate. Built to handle large-scale AI systems, Spectrum-X is becoming a crucial component of AI infrastructure. As AI models continue to grow in size and complexity, the need for efficient, scalable networking solutions will only increase.
FAQs
Q: What is Spectrum-X Ethernet?
A: Spectrum-X Ethernet is a networking technology designed specifically for AI workloads, offering up to 95 percent effective bandwidth and outperforming traditional Ethernet by a wide margin.
Q: How does Spectrum-X Ethernet improve AI training efficiency?
A: Spectrum-X Ethernet improves AI training efficiency by eliminating network hotspots and delivering stable performance, enabling higher training and inference speeds while allowing multiple workloads to run simultaneously without interference.
Q: What is the role of the MGX system in AI data centers?
A: The MGX system hosts both compute and switching components, supporting NVLink for scale-up connectivity and Spectrum-X Ethernet for scale-out growth.
Q: What is the expected release date of NVIDIA’s upcoming Vera Rubin architecture?
A: The Vera Rubin architecture is expected to be commercially available in the second half of 2026, with the Rubin CPX product arriving by year’s end.