Infinite Memory Engine™ (IME)


IME unleashes a new I/O provisioning paradigm. This breakthrough software-defined storage application introduces a whole new tier of transparent, extendable, non-volatile memory (NVM) that provides game-changing latency reduction and greater bandwidth and IOPS performance for the next generation of performance-hungry scientific, analytic and big data applications – all while offering significantly greater economic and operational efficiency than the traditional disk-based and all-flash-array storage approaches used to scale performance today.

The results:
  • Up to 1000x acceleration (with no application modification)
  • Up to 30% more data processing availability
  • Up to 70% less hardware needed to reach performance requirements, lowering the total cost of storage



Use Cases that benefit from IME include:
  • Massive Streaming I/O
    • Checkpoint/Restart
  • Multiple Applications Reading, Writing, Sharing Common Datasets in Real-time
    • Ensembles
    • Visualization
  • Real-time Data Alignment
    • Workflow Processing
    • Manipulation
    • Analysis

In mid-2015, IME is scheduled for General Availability to sites that:
  1. Have a clustered compute environment and run a parallel file system, such as Lustre® or GPFS™
  2. Are looking to deploy greenfield compute and/or data storage initiatives, OR
  3. Desire to accelerate their current environment’s I/O delivery and applications with greater cost and operational efficiency than all-disk or alternative flash approaches
  4. Have a peak bandwidth requirement from 20GB/s to multiple TB/s


Today, IME is deployed as testbeds in many of the world’s largest supercomputing centers. From these deployments, we’ve gathered a substantial amount of I/O characterization and real-world performance data at scale, proving that IME is a far more performant and efficient way to provision I/O bandwidth and IOPS than exclusively disk-based storage approaches.

From a Storage Perspective

Traditionally, storage systems are sized based on anticipated peak performance requirements, not sustained requirements – which drives the need for substantial additional hardware and CAPEX.

To provision performance with disk, Storage Architects would determine:

  • The quantity of storage controllers needed, based on the drive count that saturates the controllers’ performance (not the maximum number of supported drives that would allow them to fully utilize each storage system’s capacity density most efficiently)
  • The type of HDDs and the total quantity needed to provide an aggregate performance level (not necessarily the highest capacity drive that would enable the highest space and cost efficiency)
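As a back-of-the-envelope illustration of this sizing exercise, the sketch below computes drive and controller counts for a peak versus a sustained bandwidth target. Every constant (per-HDD bandwidth, controller saturation point, the bandwidth targets themselves) is an assumption chosen for the example, not a DDN or vendor specification.

```python
# Illustrative sizing sketch -- all constants are assumptions for the example.
PEAK_MBPS = 200_000       # assumed peak requirement: 200 GB/s
SUSTAINED_MBPS = 60_000   # assumed sustained requirement: 60 GB/s
HDD_MBPS = 150            # assumed per-HDD streaming bandwidth
CTRL_SAT_DRIVES = 80      # assumed drive count that saturates one controller

def ceildiv(a, b):
    """Integer ceiling division."""
    return -(-a // b)

def size_for(target_mbps):
    """Drives and controllers needed to reach a bandwidth target with HDDs."""
    drives = ceildiv(target_mbps, HDD_MBPS)
    controllers = ceildiv(drives, CTRL_SAT_DRIVES)
    return drives, controllers

peak_drives, peak_ctrls = size_for(PEAK_MBPS)        # sized for peak
sust_drives, sust_ctrls = size_for(SUSTAINED_MBPS)   # sized for sustained only
hardware_reduction = 1 - sust_drives / peak_drives   # ~70% fewer drives
```

Under these assumptions, sizing for sustained rather than peak bandwidth cuts the drive count by roughly 70% – the hardware that a peak-absorbing tier like IME makes unnecessary.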


Real-world utilization data shows why this over-provisioning is so wasteful:
  • 99% of the time, bandwidth utilization is less than 33% of maximum
  • 70% of the time, bandwidth utilization is less than 5% of maximum

IME absorbs peak requirements, finally decoupling performance from capacity and reducing storage hardware by up to 70%.

IME Decreases Total Cost of Storage

When provisioning performance with IME, both the economics and the buying criteria for disk-based storage arrays become much more cost and space efficient:

  • Now, architect and purchase arrays based upon sustained performance requirements while utilizing IME’s ultra dense performance packaging to deliver the peak requirements
  • Finally, decouple performance and capacity purchases (add performance with IME’s fast NVM burst buffer caching layer and add capacity with disk-based storage arrays)
  • Accelerate parallel file systems while increasing their efficiency, as IME aligns writes to remove mal-aligned I/O patterns and utilize full system bandwidth
  • Choose disk arrays that provide the highest performance and capacity density and largest number of drives managed per controller to achieve full utilization and space reduction
  • Populate each drive slot with the highest capacity drive to consolidate systems, reduce power and floor space
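The write-alignment idea above can be sketched in a few lines. This is an illustrative model of what a burst-buffer layer does – not IME's actual algorithm: scattered, unaligned writes are absorbed into a buffer and grouped into full, aligned blocks that can then be flushed to the parallel file system at full stripe width.

```python
# Illustrative model of write coalescing -- not IME's actual implementation.
BLOCK = 1 << 20  # assumed 1 MiB alignment unit of the backing file system

def coalesce(writes):
    """Group (offset, data) byte-range writes into aligned blocks.

    Returns {block_index: {offset_within_block: bytes}} so each block can
    be flushed to the file system as one large, aligned I/O.
    """
    blocks = {}
    for offset, data in writes:
        pos = offset
        while data:
            idx, off = divmod(pos, BLOCK)
            take = min(len(data), BLOCK - off)  # split writes straddling a boundary
            blocks.setdefault(idx, {})[off] = data[:take]
            pos += take
            data = data[take:]
    return blocks

# Three scattered 4 KiB writes collapse into just two aligned flush units
small_writes = [(0, b"a" * 4096), (BLOCK - 2048, b"b" * 4096), (3 * 4096, b"c" * 4096)]
aligned = coalesce(small_writes)   # touches only blocks 0 and 1
```

The file system then sees a small number of large, aligned writes instead of a stream of random small ones – the access pattern it is optimized for.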

From a Compute Perspective

Your compute is a major investment, so maximizing the amount of time spent processing and minimizing latency and idle times are key to delivering faster results and achieving higher ROI.

Today, latency and idle times when using disk-based parallel file systems are associated with:

  • Lengthy checkpoint/restart operations that consume minutes per day – adding up to hours per week and weeks per year that could be returned to application computation
  • Mal-aligned I/O (random small and large blocks) from applications, combined with POSIX locking semantics, that brings parallel file system performance to its knees, slowing down your entire cluster

When sizing compute hardware, System Architects had to consider lengthy checkpoint downtime and the negative impact of slower performing applications when determining how much processing power would be required.


  • Checkpoint/restart on HDD systems is at least 10X slower than on IME
  • Mal-aligned I/O can slow down a parallel file system by 120X. Using IME, these applications can accelerate by 3200X, as data is dynamically buffered and aligned

IME Increases Compute Availability for Data Processing

Adding IME’s fast data caching tier between your compute and parallel filesystem unlocks new capabilities and efficiencies:

  • Accelerates Checkpoint/Restart by 10X or more, returning weeks-months per year in processing time
  • Speeds up applications by removing the latency and cluster-wide impact caused by mal-aligned applications and POSIX semantics that slow down both parallel file systems and compute resources
  • Increases the ability to run jobs faster and more jobs in parallel, extending your capabilities without purchasing additional compute nodes
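To make the "weeks per year" arithmetic concrete, here is a worked example. The daily checkpoint time is an assumed figure chosen for illustration; the 10X speedup is the minimum checkpoint acceleration cited above.

```python
# Worked example -- the daily checkpoint time is an assumption for illustration;
# 10X is the minimum checkpoint acceleration claimed for IME.
minutes_per_day_on_disk = 120        # assumed: 2 hours/day spent checkpointing
speedup = 10                         # claimed minimum checkpoint acceleration

minutes_saved_per_day = minutes_per_day_on_disk * (1 - 1 / speedup)  # 108 min/day
hours_returned_per_year = minutes_saved_per_day * 365 / 60           # 657 hours
days_returned_per_year = hours_returned_per_year / 24                # ~27 days
```

Under these assumptions, roughly four weeks of compute time per year are returned to applications; heavier checkpoint loads push the figure toward the months end of the range.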


A main differentiator of IME is its openness. Our software-based approach provides much greater flexibility in how you architect your environment today and for tomorrow’s changing requirements.

Compute Vendor-Agnostic

IME works with any brand of compute node, or gives you the freedom to choose commodity components.

Flash Form Factor-Agnostic

Utilize any physical form factor, such as SAS or SATA SSDs, PCIe/NVMe, etc.

Storage Vendor-Agnostic

This fast data tier can sit in front of your DDN storage arrays or connect to another vendor’s parallel file system you may already have installed.


Application-Agnostic

IME is recognized as a mount point by MPI-IO and POSIX applications, without any modifications required.
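Because IME presents itself as an ordinary mount point, an application's I/O code does not change: it uses plain POSIX calls and simply targets a path on the IME mount. The sketch below illustrates the idea; the `write_checkpoint` helper and the `/ime` mount path are hypothetical, and a temporary directory stands in for the mount so the example is self-contained.

```python
import os
import tempfile

def write_checkpoint(mount_point: str, step: int, data: bytes) -> str:
    """Write a checkpoint with plain POSIX calls -- no IME-specific API needed."""
    path = os.path.join(mount_point, f"checkpoint_{step:06d}.dat")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)   # flush; on IME this would land in the fast NVM tier
    finally:
        os.close(fd)
    return path

# In production the target would be the IME mount (e.g. a hypothetical "/ime");
# here a temporary directory stands in for it.
mount = tempfile.mkdtemp()
saved = write_checkpoint(mount, 42, b"simulation state")
```

The only "integration" step is pointing the output path at the IME mount – the application binary is untouched.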


Network-Agnostic

Connect IME to standard networking protocols, such as InfiniBand, or even proprietary ones like Cray® Aries™.


Deployment-Agnostic

Available as software only or as an optimized appliance, to align with End Users, DIY Customers and Partners.



After years of R&D working closely with the world’s largest supercomputing centers and commercial HPC application users, IME’s capabilities have evolved to deliver efficiencies beyond other cache-buffering and SSD/flash offerings. IME’s intelligent software layer enables a wider set of use cases and greater efficiencies, aligned with the requirements of real-world applications and environments.


1000X Application Acceleration
Run more complex simulations faster with less hardware


Scale Out Data Protection
Distributed Erasure Coding


Full POSIX & HPC Compatible
No Application Modifications


Scales Memory to 100s of TB
To move large datasets out of storage and into memory extremely fast, without storage latency


Integrated With File Systems
Designed to accelerate Lustre®, GPFS™
No code Modification Needed


Non-Deterministic System
Write anywhere, no layout needed


50% Less Latency Than All Flash Arrays
Optimizing workload performance to reduce time to insight and discovery


Designed for Scalability
Patented DDN Algorithms


80% Lower Cost
Infinite scalability to provision I/O performance with the highest efficiency


Writes Fast; Reads Fast Too
No other system offers both at scale


Because IME hasn’t been formally launched yet, detailed product information and specifications are available only through your DDN Sales Representative under NDA. As we get closer to General Availability in mid-2015, we’ll share more details on this website.


What we can share today is that IME will be made available as:

  1. An Embedded Hardware Appliance – to enable rapid, turn-key deployment, OR
  2. Software Only – to align with DIY customers and DDN Technology and Channel Partners

To receive an IME Product Briefing, please contact sales@ddn.com.


Today, IME is deployed worldwide in a number of strategic customer and partner sites that want an advance preview of the technology for planning future projects. These organizations are taking an active role in benchmarking and validating real-world performance with their application sets, providing valuable feedback to ensure IME delivers proven capabilities and value when it is made generally available in mid-2015.

If you would like to be considered for the IME Testbed Program, please contact sales@ddn.com.