WHO CAN BENEFIT FROM IME?
In mid-2015, IME is scheduled for General Availability to sites that:
- Have a clustered compute environment and run a parallel file system, such as Lustre® or GPFS™
- Are looking to deploy greenfield compute and/or data storage initiatives OR
- Desire to accelerate their current environment's I/O delivery and applications with greater cost and operational efficiency than all-disk or alternative flash approaches
- Have a peak bandwidth requirement from 20 GB/s to multiple TB/s
THE CASE FOR IME
IME is currently deployed as a testbed in many of the world's largest supercomputing centers. From these deployments, we've gathered a substantial amount of I/O characterization and real-world performance data at scale, proving that IME is a far more performant and efficient way to provision I/O bandwidth and IOPS than exclusively disk-based storage approaches.
Traditionally, storage systems are sized based on anticipated peak performance requirements, not sustained requirements, which drives the need for substantial additional hardware and CAPEX.
To provision performance with disk, Storage Architects would determine:
- The quantity of storage controllers needed, based on the drive count that saturates each controller's performance (rather than the maximum number of supported drives, which would fully utilize each storage system's capacity density most efficiently)
- The type of HDDs and the total quantity needed to provide an aggregate performance level (not necessarily the highest-capacity drive, which would enable the best space and cost efficiency)
IME Decreases Total Cost of Storage
When provisioning performance with IME, the economics and buying criteria for disk-based storage arrays change, making them far more cost- and space-efficient:
- Now, architect and purchase arrays based upon sustained performance requirements while utilizing IME’s ultra dense performance packaging to deliver the peak requirements
- Finally, decouple performance and capacity purchases (add performance with IME’s fast NVM burst buffer caching layer and add capacity with disk-based storage arrays)
- Accelerate parallel file systems while increasing their efficiency, as IME aligns writes to remove misaligned I/O patterns and utilize full system bandwidth
- Choose disk arrays that provide the highest performance and capacity density and largest number of drives managed per controller to achieve full utilization and space reduction
- Populate each drive slot with the highest capacity drive to consolidate systems, reduce power and floor space
Your compute is a major investment, so maximizing the amount of time spent processing and minimizing latency and idle times are key to delivering faster results and achieving higher ROI.
Today, latency and idle times when using disk-based parallel file systems are associated with:
- Lengthy Checkpoint/Restart operations that consume minutes per day, which become hours per week and weeks per year that could be returned to application computation
- Misaligned I/O (random small and large blocks) from applications and POSIX locking semantics that bring parallel file system performance to its knees, slowing down your entire cluster
When sizing compute hardware, System Architects have had to factor in lengthy checkpoint downtime and the negative impact of slower-performing applications when determining how much processing power would be required.
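The minutes-per-day to weeks-per-year claim is simple arithmetic. A back-of-envelope sketch, using illustrative assumed figures rather than measured IME numbers:

```python
# Back-of-envelope estimate of compute time lost to checkpointing.
# The 30 minutes/day figure is an illustrative assumption, not a measured value.

checkpoint_minutes_per_day = 30      # assumed daily checkpoint overhead
days_per_year = 365

baseline_hours_per_year = checkpoint_minutes_per_day * days_per_year / 60
print(f"Baseline checkpoint overhead: {baseline_hours_per_year:.0f} hours/year "
      f"(~{baseline_hours_per_year / 24:.1f} days)")

# With a 10X acceleration of checkpoint I/O, 90% of that overhead
# is returned to application computation.
speedup = 10
returned_hours = baseline_hours_per_year * (1 - 1 / speedup)
print(f"Time returned at {speedup}X: {returned_hours:.0f} hours/year "
      f"(~{returned_hours / 24:.1f} days)")
```

Even a modest half-hour of daily checkpoint time compounds to roughly a week of lost computation per year, which is why checkpoint acceleration translates directly into compute ROI.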
IME Increases Compute Availability for Data Processing
Adding IME’s fast data caching tier between your compute and parallel filesystem unlocks new capabilities and efficiencies:
- Accelerates Checkpoint/Restart by 10X or more, returning weeks-months per year in processing time
- Speeds up applications by removing the latency and cluster-wide impact caused by misaligned applications and POSIX semantics that slow down both parallel file systems and compute resources
- Lets you run jobs faster and more jobs in parallel, extending your capabilities without purchasing additional compute nodes
FULFILLS THE PROMISE OF SOFTWARE-DEFINED STORAGE
A main differentiator of IME is its openness. Our software-based approach provides much greater flexibility in how you architect your environment today and for tomorrow’s changing requirements.
IME works with any brand of compute node and gives you the freedom to choose commodity components
Flash Form Factor-Agnostic
Utilize any physical flash form factor, such as SAS or SATA SSDs
This fast data tier can sit in front of your DDN storage arrays or can connect to another vendor's parallel file system you may already have installed
Application-Agnostic
IME is presented as a mount point to MPI-IO and POSIX applications, without any modifications required
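Because IME appears to applications as an ordinary mount point, existing POSIX I/O code needs no changes beyond its output path. A minimal sketch of an unmodified checkpoint routine (the path used here is hypothetical; on an IME system it would simply live under the IME mount):

```python
# Sketch: an application checkpoints through plain POSIX file I/O.
# Nothing here is IME-specific -- that is the point of mount-point transparency.
import os
import array

def write_checkpoint(path, state):
    """Write a buffer of doubles with ordinary POSIX calls (open/write/fsync)."""
    buf = array.array("d", state).tobytes()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, buf)
        os.fsync(fd)   # flush out of the page cache toward the storage tier
    finally:
        os.close(fd)
    return len(buf)

# A scratch directory stands in for the mount point in this sketch.
nbytes = write_checkpoint("/tmp/ckpt_rank0.bin", [0.0, 1.5, 2.5])
print(nbytes)  # 3 doubles * 8 bytes = 24
```

The same applies to MPI-IO: collective writes address a file path, so redirecting them at a caching tier mounted in the namespace requires no source changes or relinking.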
Connect IME to standard networking protocols, such as InfiniBand or even proprietary ones like Cray® Aries™
Available as software only or as an optimized appliance to align with End Users, DIY Customers and Partners
After years of R&D, and working closely with the world’s largest supercomputing centers and commercial HPC application users, IME’s capabilities are highly evolved to deliver efficiencies beyond other cache buffering and SSD/Flash offerings. IME’s intelligent software layer enables a wider set of use cases and more efficiencies to align with the requirements of real world applications and environments.
1000X Application Acceleration
Run more complex simulations faster with less hardware
Scale Out Data Protection
Distributed Erasure Coding
Full POSIX & HPC Compatible
No Application Modifications
Scales Memory to 100s of TB
Move large datasets out of storage and into memory extremely fast, without storage latency
Integrated With File Systems
Designed to accelerate Lustre®, GPFS™
No Code Modifications Needed
Write anywhere, no layout needed
50% Less Latency Than All Flash Arrays
Optimizing workload performance to reduce time to insight and discovery
Designed for Scalability
Patented DDN Algorithms
80% Lower Cost
Infinite scalability to provision I/O performance with the highest efficiency
Writes Fast; Reads Fast Too
No other system offers both at scale
Because IME has not yet been formally launched, detailed product information and specifications are available only through your DDN Sales Representative under NDA. As we get closer to General Availability in mid-2015, we'll share more details on this website.
What we can share today is that IME will be made available as:
- An Embedded Hardware Appliance – To enable rapid, turn-key deployment OR
- Software Only – To align with DIY customers and DDN Technology and Channel Partners
To receive an IME Product Briefing, please contact email@example.com
IME TESTBED PROGRAM
IME is currently deployed worldwide at a number of strategic customer and partner sites that want an advanced preview of the technology for planning future projects. These organizations are taking an active role in benchmarking and validating real-world performance with their application sets, and providing valuable feedback to ensure IME delivers proven capabilities and value when it becomes generally available in mid-2015.
If you would like to be considered for the IME Testbed Program, please contact firstname.lastname@example.org