Discussions about performance get complicated very quickly. They usually arise from the awkward yet apt question about next-generation systems: “Will my workflow efficiency, with respect to cost and performance, improve enough to warrant the overhead of change?” The answer is rarely a black-and-white “yes” or “no.” Most often it is “it depends.” Yet, for some reason, year after year, customer after customer achieves consistent improvements in application performance and reductions in operating costs with DDN systems.

For 20 years DDN has been known for building and delivering the fastest and most scalable storage systems in the world. But what “fast” means has changed a lot over those 20 years. Just in the past few years we’ve seen flash, MLC, SCM, 100G InfiniBand, NVMe, ever-faster GPUs, a new class of CPUs, and public and hybrid cloud come into the mix.

While device performance characteristics often show an order-of-magnitude gain from generation to generation, those improvements typically do not materialize at the user level. This creates frustration and confusion, and is most often the result of massive system inefficiencies. DDN’s technological and engineering strength has always been turning raw device and protocol performance into real-world system, application, and workload speedup, as efficiently as possible and at any scale.

DDN takes efficiency at scale in the modern world of new networking protocols, GPUs, CPUs, and multicloud to a whole other level of performance with the EXAScaler® parallel file system architecture, as well as IME, a flash-optimized accelerator for existing filesystems. Both technologies reach deep into the storage, aggregate the performance of flash across network nodes, and present storage capability and content directly to the compute node and the application with seamless, low-latency delivery.

The software is fully distributed and resides everywhere in the data path. This enables fully aggregated scaling across the whole environment, no matter the scale, delivering raw storage performance directly to the application. The extreme performance benefits are the result of frictionless end-to-end data-path optimization.

It is not just the storage devices or platforms that are being optimized. Full optimization extends through the network by using RDMA-enabled multi-rail data routing, and into the client through optimization at the processor and NUMA level, integration with GPUs, multicloud delivery, and the processing and exposure of data to the application. DDN isn’t just a storage company; it’s an IO company. Where other storage providers stop where the box meets the network, DDN takes the whole system into account, from device to storage system, to network, to compute, to application.
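To make the multi-rail and NUMA points concrete: EXAScaler is built on Lustre, whose LNet layer supports multi-rail configurations across several RDMA interfaces. A minimal sketch follows, assuming a Lustre 2.10+ client; the interface names `ib0`/`ib1`, the NUMA node number, and the application name are illustrative assumptions, not a recommended production setup.

```shell
# Hypothetical sketch: bond two InfiniBand rails into one LNet network
# so client<->server traffic can use both HCAs (Lustre multi-rail, 2.10+).
# Interface names ib0/ib1 are assumptions; adjust to your fabric.
lnetctl lnet configure
lnetctl net add --net o2ib --if ib0,ib1

# Show the resulting LNet configuration for verification.
lnetctl net show

# On the client, pin a bandwidth-sensitive job to the NUMA node closest
# to its HCA, so IO buffers stay local to the adapter (node 0 is assumed).
numactl --cpunodebind=0 --membind=0 ./my_io_heavy_app
```

The NUMA pinning matters because a buffer allocated on the remote socket forces every transfer to cross the inter-socket link before reaching the adapter, which can erode exactly the kind of device-level gains the text describes.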

For two decades, DDN has been successfully building and implementing the fastest parallel filesystem architectures and solutions in the world for thousands of demanding customers. With hundreds of engineers trained and experienced in AI, Big Data, and HPC workloads and applications, our understanding of end-to-end data path optimization in highly complex environments is deeper and broader than that of anyone else in the industry.

DDN engineers and field application engineers live and breathe performance at scale, and the translation of storage performance into real-world end-to-end workflow efficiency. After thousands of successful deployments at scale and complexity, we have learned that storage matters, the network matters, GPUs and CPUs matter, protocols matter, multicloud matters, IO calls matter, and most of all, people matter. DDN gets it, and that’s why we’ll always be the fastest.

Come and hear me talk on performance and other topics at the DDN User Group meeting at ISC19. Register here to reserve your spot.

  • Sven Oehme
  • Chief Research Officer Data at Scale
  • Date: June 6, 2019