Powering the AI Factory Era with Data Intelligence
At SC25, DDN is unveiling the next evolution of HPC and AI infrastructure, where data intelligence becomes the engine that powers every breakthrough. For more than two decades, DDN has been the foundation beneath the world’s most demanding HPC and AI workloads. Now, that expertise is reshaping how industries, enterprises, and nations build intelligent systems at scale.
As HPC and AI converge, one truth has become clear: the challenge isn’t compute, it’s data. Traditional architectures leave GPUs waiting on information, driving up costs and power consumption. In fact, global data-center energy demand is on track to double by 2030, surpassing 1,000 TWh per year, roughly the annual use of the UK or Canada. DDN’s unified data platform eliminates those bottlenecks, keeping every GPU fed and every workload productive while reducing power, cooling, and data-center floor space by 40%.
“We are to data what NVIDIA is to compute,” said DDN CEO Alex Bouzari. “Together, we are building the intelligent foundation of the AI Factory era.”
Unifying AI and HPC with DDN CORE™
At the center of DDN’s showcase is DDN CORE™, the unified data engine for HPC and AI. It unites DDN’s proven EXAScaler® and Infinia technologies and brings file and object access into a single, software-defined architecture, accelerating the entire data lifecycle from training to inference. DDN CORE offers intelligent tiering across NVMe, HDD, and cloud storage and hybrid flexibility across on-premises, edge, and cloud environments, providing one intelligent, software-defined foundation wherever you run HPC and AI workloads.
DDN CORE provides massive business value to HPC and AI customers, including:
- Unified view of data: A single source of truth across HPC and AI, radically simplifying data access, governance, and sharing between teams, clouds, and regions.
- Extreme throughput for every workload: 15× faster checkpointing and 4× higher ingest throughput under mixed workloads keep GPUs saturated, cutting model training and simulation times from weeks to days and translating directly into faster innovation and revenue velocity.
- Autonomous data intelligence: Built-in observability, intelligent data placement, and self-healing automation ensure zero wasted cycles and no manual tuning.
- Run anywhere, without compromise: Use the same high-speed data plane across hybrid and multi-cloud environments, validated on Google Cloud, Oracle Cloud Infrastructure (OCI), CoreWeave, and over 40 AI cloud providers across the globe.
- Unmatched efficiency at scale: Proven across more than 1,000,000 GPUs in production, achieving up to 99% GPU utilization in NVIDIA DGX SuperPODs and sovereign AI factories.
And debuting at SC25, the AI400X3i, DDN’s next-generation high-performance storage appliance, demonstrates that innovation in hardware and software go hand in hand. Purpose-built for AI and HPC workloads with DDN EXAScaler®, the AI400X3i delivers:
- 140 GB/s read throughput and 110 GB/s write throughput in just 2U of rack space
- 4 million IOPS per node and up to 70 million IOPS in a single rack
- 40% data-center savings in power, cooling, and space
The result is predictable, high-speed performance that turns every watt and terabyte into measurable business value.
Accelerating Enterprise AI Readiness with AI FASTTRACK™
Most enterprises know what they want from AI but are slowed by complexity, integration time, and unpredictable costs. DDN AI FASTTRACK™ changes that equation. Through a single program spanning a suite of DDN offerings, customers can deploy validated, production-ready AI environments in days, not months:
- DDN Enterprise AI HyperPOD™: A turnkey, pre-integrated system built by Supermicro that combines DDN CORE, NVIDIA compute, and NVIDIA AI Enterprise software.
- cloud.ddn.com: Unified portal for launching certified DDN environments.
- Ignite AI: Converts existing EXAScaler HPC clusters into AI pipelines within weeks.
- Google Cloud Managed Lustre: Powered by DDN EXAScaler, now GA and NVIDIA-validated.
- DDN Infinia on Oracle Cloud (OCI): Elastic object storage for analytics and inference.
AI FASTTRACK combines a turnkey deployment experience, hybrid cloud portability, expert guidance, and consistent data semantics across every environment, giving enterprises a faster path to measurable AI ROI. Business value includes:
- Instant AI infrastructure: Deploy HyperPOD in days, not weeks or launch NVIDIA-certified DDN environments directly from cloud.ddn.com — no integration cycles, no manual setup.
- Accelerate inference and RAG performance: Use HyperPOD to deploy production-grade inference and retrieval-augmented generation (RAG) pipelines delivering 22× faster RAG pipelines, 18× faster inference, and 10× higher efficiency.
- Unmatched speed and scale in the cloud: Get all the performance, efficiency, and data intelligence of DDN CORE, now delivered through your preferred cloud or NCP. Run faster, scale further, and power your AI workloads with the same DDN architecture trusted by thousands of customers.
- Freedom of choice: Deploy and operate AI workloads anywhere — on-prem, cloud, or hybrid — with full compatibility and identical performance across environments.
You choose the platform; DDN delivers the experience.
- Faster path to AI ROI: AI Clouds powered by DDN enable you to focus on models, data, and outcomes — not systems. With fully validated, orchestrated, and scalable infrastructure handled by your cloud provider, you get instant access to AI power with the reliability, performance, and efficiency of DDN so your teams move from idea to impact faster than ever.
AI FASTTRACK enables organizations to deploy predictable, high-performance AI at any scale, anywhere.
Building AI Factories and Sovereign AI Platforms
DDN technology underpins the world’s most efficient and secure AI Factories, from commercial hyperscalers to national research programs. Each system is validated with NVIDIA AI Factory reference architectures for day-one readiness at scale.
- Sovereign AI Blueprints: Co-developed with NVIDIA and global integrators such as Accenture and Deloitte, aligned with FedRAMP, GDPR, and NIST standards and regional mandates.
- AI Factory Reference Designs: Modular systems integrating DDN storage, NVIDIA compute, and next-generation fabrics.
- Sustainable Performance: A Tokens-per-Watt metric sets a new benchmark for energy efficiency, achieving up to 40% power reduction and 2× service capacity per rack.
From Yotta Shakti Cloud (India) — running 8,000 NVIDIA B200 GPUs on DDN AI400X3 systems — to large national AI deployments at Singtel and HUMAIN, organizations are building sovereign, efficient, and scalable AI platforms on DDN.
Why DDN’s Architecture is Unrivaled
With over 25 years of experience, DDN has the most trusted, validated, and proven data platform on the planet. We accelerate HPC and AI through key components of our technology, including:
- Parallelism at Every Layer: Fully distributed metadata and I/O services ensure linear scalability from tens to thousands of GPUs.
- Multi-Tenancy and Security: Native isolation, per-tenant encryption, and KMIP-integrated key management for sovereign and multi-cloud environments.
- KV-Cache Acceleration: Direct object streaming and persistent cache reduce LLM inference latency by 30–50% on NVIDIA TensorRT and Triton Inference Server.
- GPU-Aware Data Path: Zero-copy RDMA data transfers between DDN and NVIDIA BlueField DPUs minimize host overhead and maximize tokens/sec.
- Predictive Reliability: AI-driven self-healing via DDN Insight eliminates downtime during peak utilization.
- Open, Portable, Flexible: Can run on commodity hardware from Dell, HPE, Lenovo, Supermicro, ASUS, and more — deployable in any environment.
Whether your use case is HPC, AI, or both, our technology separates us from the rest. That’s why DDN supercharges over 1,000,000 GPUs for over 11,000 customers.
Live at SC25
The DDN Beyond Artificial Data Summit at SC25 convenes CIOs and I&O leaders, AI/ML architects, HPC admins, data scientists, sovereign-cloud decision makers, and key partners from the ecosystem. Led by DDN co-founders Alex Bouzari and Paul Bloch and featuring DDN leaders, customers, and guest speakers from NVIDIA, Supermicro, and Google Cloud, the event explores HPC, AI Factories and Enterprise AI, Sovereign AI strategies, multi-tenancy and governance, and hybrid/on-prem operating models. Attendees leave with a clear blueprint for turning HPC and AI data infrastructure into durable business advantage.
Visitors to DDN booth #1527 will experience AI infrastructure performance like never before:
- Customer and Partner Spotlights: NVIDIA, Guardant Health, CINECA, Google Cloud, Supermicro, and others showcasing results from DDN-powered data infrastructure.
- See the new AI400X3i and how it integrates into NVIDIA AI Factories.
- Live demos of RAG pipeline optimization, new financial service models, and KV Cache on EXAScaler.
- Unified CORE pipeline demo: EXAScaler for training + Infinia for inference on NVIDIA.
Turning Infrastructure Cost into Intelligence ROI
Every DDN deployment converts infrastructure expense into productivity. On average, customers realize:
- 2× faster time-to-value
- 40% lower power and cooling costs
- Payback in under 12 months
- $6–8 million annual energy savings per 10 MW AI factory
From CERN and NASA to NVIDIA and xAI, DDN powers the world’s most ambitious HPC and AI projects.
At SC25, the company is once again setting the standard for performance, efficiency, and trust in AI data infrastructure. AI breakthroughs don’t happen without the right foundation, and that foundation runs on DDN.