In a major announcement at the CES 2026 trade show in Las Vegas, Nvidia CEO Jensen Huang revealed that the company's next-generation artificial intelligence data centre platform, Rubin, has officially entered production. The move signals Nvidia's aggressive push to maintain its dominance in the fiercely competitive and rapidly expanding AI hardware market.
Rubin Platform: Performance Leap and Timeline
Jensen Huang confirmed during his keynote address that all six chips constituting the Rubin computing platform have been received from manufacturing partners. The company is on track to begin customer deployments in the second half of 2026. Huang attributed the accelerated timeline to exploding demand for advanced computing power, driven by increasingly complex and widely adopted AI software that is overwhelming existing data centre infrastructure.
The Rubin platform, named after astronomer Vera Rubin, is touted to deliver a monumental performance upgrade over its predecessor, Blackwell. Nvidia claims the new AI accelerator offers 3.5 times better performance for AI training tasks and up to five times better performance for running AI models (inference). The accompanying central processing unit (CPU) features 88 cores and is said to provide double the performance of the model it replaces. A significant benefit for operators is the reduced operational cost, as Rubin-based systems can achieve similar results using fewer components.
Strategic Early Disclosure and Market Dynamics
Notably, Nvidia has disclosed details about Rubin much earlier than its typical schedule, which usually involves unveiling major hardware at its spring GTC conference. This strategic shift appears designed to keep the industry's focus locked on Nvidia's roadmap amidst intensifying competition. Despite highlighting its future products, the company stressed that demand for its current platforms remains exceptionally strong.
On the sensitive topic of the Chinese market, Nvidia reported continued robust interest from Chinese customers for its H200 chip. The company's Chief Financial Officer, Colette Kress, stated that Nvidia has sufficient supply to meet Chinese demand without disrupting deliveries to other regions, regardless of the outcome of ongoing U.S. license reviews under the Trump administration. However, final deployment would also require approval from Chinese authorities for the U.S.-made chips.
First Adopters and Expansion into New Frontiers
The Rubin hardware will be available both as part of Nvidia's integrated DGX SuperPOD supercomputer systems and as standalone products for modular deployments. Major cloud providers like Microsoft are expected to be among the first to roll out the new technology later this year. This aligns with the current market where a significant portion of spending on Nvidia systems comes from a handful of large cloud players including Microsoft, Google Cloud, and Amazon Web Services.
Beyond data centres, Nvidia used the CES stage to expand its footprint in autonomous vehicles and robotics. The company introduced a new platform called Alpamayo, designed to help vehicles reason through complex real-world scenarios. This model, which can be retrained by users, aims to enable autonomous systems to handle unexpected situations like infrastructure failures. Building on existing partnerships, such as the one with Mercedes-Benz, Nvidia confirmed the first Nvidia-powered car is scheduled to hit U.S. roads in the first quarter of 2026.