Cornelis CN5000 Launches: Rewiring AI and HPC Networking at Scale

June 3, 2025

Cornelis Networks today announced the launch of its CN5000 family – a high-performance, scale-out networking solution engineered to meet the rising demands of AI and high-performance computing (HPC). Built on the company’s proprietary Omni-Path® architecture, CN5000 delivers a bold promise: up to 30% higher HPC application performance, double the message rate of current solutions, and six times faster collective communication for AI workloads.

That’s not just an incremental upgrade – it’s a direct shot across the bow at InfiniBand and Ethernet in the race to support massive, compute-intensive deployments.

The company also claims 2x higher message rates, 35% lower latency, and significant speedups in real-world workloads such as Ansys Fluent, seismic simulation, and large language model (LLM) training.

Omni-Path Advantage

At the heart of CN5000 is the updated Omni-Path architecture, known for its lossless, congestion-free data flow. It uses credit-based flow control and adaptive routing to keep traffic moving smoothly at scale.
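For readers less familiar with the mechanism, credit-based flow control means a sender transmits only when the receiver has advertised free buffer space, so the fabric never has to drop packets under load. The Python sketch below is a simplified illustration of that general idea only; it is not Cornelis code, and the class names and buffer sizes are hypothetical.

```python
# Minimal sketch of credit-based flow control (illustrative only, not Cornelis code).
# The receiver advertises "credits" equal to its free buffer slots; the sender may
# only transmit while it holds credits, so packets are deferred rather than dropped.

from collections import deque


class Receiver:
    def __init__(self, buffer_slots: int = 8):
        self.buffer = deque()
        self.credits_granted = buffer_slots  # initial credit advertisement

    def accept(self, packet) -> None:
        self.buffer.append(packet)

    def drain(self) -> int:
        """Consume buffered packets and return freed credits to the sender."""
        freed = len(self.buffer)
        self.buffer.clear()
        return freed


class Sender:
    def __init__(self, receiver: Receiver):
        self.receiver = receiver
        self.credits = receiver.credits_granted

    def send(self, packet) -> bool:
        if self.credits == 0:
            return False  # hold the packet; never transmit into a full buffer
        self.receiver.accept(packet)
        self.credits -= 1
        return True

    def replenish(self, credits: int) -> None:
        self.credits += credits


if __name__ == "__main__":
    rx = Receiver(buffer_slots=4)
    tx = Sender(rx)
    print([tx.send(f"pkt-{i}") for i in range(6)])  # last two sends deferred, not dropped
    tx.replenish(rx.drain())                        # receiver drains, credits return
    print(tx.send("pkt-6"))                         # transmission resumes
```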

With support for up to 500,000 endpoints today, a roadmap to scale into the millions, and seamless integration with CPUs, GPUs, and accelerators from AMD, Intel, and NVIDIA, CN5000 positions itself as vendor-neutral and future-ready.

Product Family Components

CN5000 includes 400G SuperNICs, modular switches (including a 576-port Director-class option), an open-source OPX software suite, and cabling options optimized for dense, scalable deployments.
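The OPX suite sits on the host side of that stack and, as open-source software, is typically consumed through standard HPC interfaces such as libfabric and MPI. As a hedged sketch, the snippet below shows one common way to pin a libfabric-based MPI job to a named provider via the FI_PROVIDER environment variable; the "opx" provider name, the mpirun invocation, and the application binary are assumptions here, so confirm them against the OPX documentation for your installation.

```python
# Illustrative launcher: run an MPI job over a specific libfabric provider.
# FI_PROVIDER is libfabric's standard provider-selection variable; "opx" is the
# Omni-Path Express provider name assumed here -- verify it against your
# installed OPX software suite before relying on it.

import os
import subprocess

env = os.environ.copy()
env["FI_PROVIDER"] = "opx"     # request the Omni-Path Express provider
env["FI_LOG_LEVEL"] = "warn"   # optional: surface provider-selection issues

# Hypothetical 4-rank job; replace ./my_hpc_app with your own binary.
subprocess.run(["mpirun", "-n", "4", "./my_hpc_app"], env=env, check=True)
```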

Whether it's shortening LLM training cycles or accelerating weather modeling, CN5000 is optimized for real-world throughput and scalability – not just theoretical peak speeds.

Strategic Roadmap

Cornelis isn’t stopping at 400G. The company outlined plans for future products:

CN6000 (800G) will blend Omni-Path with RoCE-enabled Ethernet.

CN7000 (1.6T) aims to redefine ultra-scale performance by integrating Ultra Ethernet Consortium standards with Cornelis’ core architecture.

That roadmap signals Cornelis’ ambition not only to compete but to help define the next era of AI and HPC networks.

The TechArena Take

So glad to see you re-enter the arena, Cornelis! The CN5000 is the clearest signal yet that the future of AI and HPC networking isn't about a single vendor. With this delivery, Cornelis has put a spotlight on the dusty technologies being tapped today for AI and HPC clusters and offered an alternative that removes architectural friction and reclaims GPUs that are sitting idle. That's right: by Cornelis' estimates, AI GPUs sit idle for most of their cycles, which is astounding given their price tags.

If the performance claims are delivered with this end-to-end network solution, a network upgrade will pay for itself in increased GPU compute delivery. We can't wait to see how this plays out.
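That payback claim is easy to sanity-check with back-of-the-envelope arithmetic: if a faster fabric reclaims even a modest share of idle GPU hours, the recovered compute value accumulates quickly against the cost of the network. Every number in the sketch below is a hypothetical placeholder, not a Cornelis or TechArena figure; plug in your own cluster size, GPU costs, and idle fractions.

```python
# Back-of-the-envelope payback estimate (all numbers are hypothetical placeholders).
gpu_count = 1024
gpu_cost_per_hour = 2.50           # assumed effective cost of one GPU-hour, USD
hours_per_year = 24 * 365
idle_fraction_before = 0.50        # assumed share of GPU cycles lost waiting on the network
idle_fraction_after = 0.35         # assumed idle share after a fabric upgrade
network_upgrade_cost = 2_000_000   # assumed total fabric cost, USD

recovered_gpu_hours = gpu_count * hours_per_year * (idle_fraction_before - idle_fraction_after)
annual_value_recovered = recovered_gpu_hours * gpu_cost_per_hour
payback_years = network_upgrade_cost / annual_value_recovered

print(f"Recovered GPU-hours/year: {recovered_gpu_hours:,.0f}")
print(f"Value recovered/year:     ${annual_value_recovered:,.0f}")
print(f"Payback period:           {payback_years:.2f} years")
```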

