News

DDN Redefines AI and High-Performance Computing at Scale with Google Cloud Managed Lustre Innovations

New Google Cloud Managed Lustre capabilities with DDN EXAScaler improve AI training, inference, and high-performance computing, delivering scale, performance, and economics

Las Vegas, NV — [April 22, 2026] — DDN, the world’s leading AI data platform provider, today shared groundbreaking innovations involving Google Cloud Managed Lustre, unveiled at Google Cloud Next 2026. Built on DDN’s proven Lustre expertise with EXAScaler and delivered in collaboration with Google Cloud, these advancements redefine what’s possible for AI training, inference, and high-performance computing (HPC) in the cloud.

With performance scaling to 10 terabytes per second, Google Cloud Managed Lustre delivers improved throughput, elasticity, and cost efficiency—enabling enterprises to run the world’s most demanding AI and HPC workloads. The launch underscores DDN’s vision to power the full AI lifecycle—from training and fine-tuning to inference and large-scale simulation—through a unified, high-performance data platform.

“This is not just a product milestone—it’s a market-shaping moment,” said Alex Bouzari, CEO at DDN. “We are delivering one of the fastest-growing, highest-performance managed Lustre services in the industry, purpose-built for the realities of modern AI at scale. This announcement reinforces DDN’s leadership in AI data platforms and our shared commitment to helping customers innovate faster, at lower cost, and with greater confidence.”

Built for the Next Generation of AI

Google Cloud Managed Lustre provides a POSIX-compliant, parallel file system that delivers high throughput and low latency. Customers across industries—including AI, financial services, robotics, autonomous systems, and advanced research—are rapidly adopting the platform to power:

  • Large-scale LLM training, fine-tuning, and checkpointing
  • High-throughput AI inference, RAG, and KV-cache acceleration
  • Financial modeling, life sciences, and HPC workloads
  • Machine vision, multimodal AI, and physical simulations

A key innovation unveiled at Google Cloud Next is the use of Managed Lustre as a shared KV-cache for AI inference, dramatically improving performance and economics. By leveraging Lustre’s ultra-low latency and high aggregate throughput, customers can avoid redundant computation and scale inference across clusters with virtually unlimited shared cache capacity.

In benchmark testing, this approach delivered:

  • A 75% improvement in total inference throughput
  • A greater-than-40% reduction in mean time to first token compared to using a KV-cache in host memory alone

The result is faster, more responsive AI applications—and significantly lower cost of inference at scale.

A Collaboration Driving Cloud-Scale Performance

The offering combines DDN’s long-standing Lustre expertise and extreme-scale data systems with Google Cloud’s elastic infrastructure, innovations in compute and Hyperdisk, global reach, and access to cutting-edge accelerators, including TPUs.

“Managed Lustre enables us to scale AI model training for AFEELA Intelligent Drive by 3x compared to other Google Cloud solutions,” said Motoi Kataoka, Senior Manager, AI & Data Analytics Platform, Sony Honda Mobility Inc.

New capabilities announced at Google Cloud Next also include a single, dynamic hot and cold tier, designed to deliver high performance for hot data with dramatically improved economics—eliminating the complexity, performance cliffs, and SKU sprawl common in competing tiered storage solutions.

Setting the Pace for the Industry

With rapid customer adoption, explosive capacity growth, and performance milestones, the combination of DDN and Google Cloud Managed Lustre is setting a new benchmark for AI and HPC in the cloud.

“This is what happens when deep infrastructure expertise meets cloud-scale innovation,” said Kirill Tropin, Group Product Manager at Google Cloud. “Our partnership with DDN enables customers to run their most demanding AI workloads with the performance, scale, and simplicity they need—today and into the future.”

To learn more about DDN, please visit ddn.com.

About DDN

DDN is the world’s leading provider of AI data storage and data management platforms, powering over 20 years of innovation across HPC, enterprise, and the largest AI deployments on Earth. With its EXA, Infinia, and intelligent data management platforms, DDN delivers unmatched performance, scale, and business value for customers building next-generation AI factories, hyperscale clouds, and Sovereign AI initiatives. DDN is the trusted partner for thousands of the world’s most data-intensive organizations, including the leading national labs, research institutions, enterprises, hyperscalers, financial firms, and autonomous vehicle innovators. For more information, visit www.ddn.com.

Follow DDN: LinkedIn, X, and YouTube.

DDN Media Contact:
Amanda Lee,
VP, Marketing,
Analyst & Public Relations
amlee@ddn.com
727-272-0781