“The test of the machine is the satisfaction it gives you. There isn’t any other test.”
― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance
OK, so Pirsig was talking about motorbikes, and I’m talking about HPC solutions, but the test is the same.
DDN’s mission comes down to satisfying, nay, delighting our 2,500-strong customer base in their highly complex, demanding and evolving requirements. In AI, Big Data and HPC that is no mean feat. We all want radical performance, massively bigger scales, million-fold speedups, the ultimate in cost efficiency, freedom from vendor lock-in, and full cloud enablement. But our HPC applications adapt very slowly, and hardware speedups sadly come to a screeching halt when confronted with layers of storage, network, compute and distributed workloads.
We see waves of new software enter the Data at Scale world (Hadoop, TensorFlow, Spark, Kubernetes), but we still need to cater for the big, heavy guns in our application army. What’s more, we demand that those radical step changes come fully equipped with all the comforts we’re used to: quotas, security, multi-protocol access, and extreme stability and uptime.
One of DDN’s enduring characteristics over 20 years has been addressing this conundrum. DDN is a real engineering company: the paradox of the world’s largest private storage company operating with little fanfare, with hundreds of engineers razor-sharp in their focus on innovating, creating, architecting and delivering, generation after generation, the baddest, fastest and most efficient storage solutions at scale the world has ever seen.
Light on marketing and very heavy on R&D, our success over the years is the result of long-term focus and an absolute passion and dedication to finding and solving the most complex data-at-scale requirements in the world, all with a regular cadence of breakthrough, radical innovation. The proof of DDN’s success is in our thousands of happy customers, more than a hundred patents in our domain of expertise, and the more than 10 exabytes of storage we have delivered to address the toughest challenges in data at scale.
This ISC, we’ll be talking about IME and EXA5, our two flagship software products that transform the art of radical innovation into highly reliable, easy-to-use, production-class evolutionary progress that delights our customers with their challenging data-at-scale requirements.
First, we’ll be talking about Infinite Memory Engine. IME is a radical IO engine that happens to be an absolute speed demon. Take a look at our hard write and read results on the IO500, for instance – indisputably number one.
From AI and Machine Learning to Distributed HPC, FinTech, Healthcare, Autonomous Driving and many other fields, IME clearly does something different to your IO, and that something pays dividends in ways thought to be impossible. That’s simply down to the way we’ve built the end-to-end data path: we’ve removed the bottlenecks filesystems hit when presented with random, misaligned and shared-file IO, created a ground-up distributed namespace, fully optimized it for flash, run it at full scale both in the cloud and on prem, and totally eliminated your data bottlenecks – all with zero need for application changes or user training. What’s not to like?
We’ll also be talking about EXA5. EXA5 is a breakthrough file system solution and part of our EXAScaler family. Over the past several years we have spent hundreds of engineering man-years developing radical components in and around EXAScaler to significantly accelerate, enhance and advance our customers’ AI and multicloud workflows. We’ve also added performance-boosting enhancements while making the whole solution much smaller, more robust and significantly simpler to use – and that’s EXA5 in a nutshell.