As the speed and scale of environments and applications grow, the negative effects of I/O contention and latency are magnified. In HPC, the parallel file system bottleneck has been so significant that it has prevented clustered compute environments from realizing the acceleration that wide deployments of SSDs have delivered to most other workloads. As a result, spinning disk drives remain the primary way HPC sites provision the massive bandwidth and IOPS required by performance-intensive applications. This approach is highly inefficient, often resulting in overprovisioned capacity as well as hardware sprawl.
Additionally, several other factors can bring entire compute clusters and their parallel file systems to their knees: mixed I/O, the fragmented data patterns of “problem applications,” and large datasets that force applications to run out of core.
IME DELIVERS A GAME-CHANGING TECHNOLOGICAL BREAKTHROUGH
To combat these challenges, DDN’s IME14K revolutionizes how information is saved and accessed by compute. IME software allows data to reside next to compute in a very fast, shared pool of non-volatile memory (NVM).
This new data adjacency significantly reduces latency by allowing IME software’s revolutionary, fast data communication layer to pass data without the file locking contention inherent in today’s parallel file systems.
To ease integration, IME utilizes common protocols, making it transparent to both applications and the parallel file system. Therefore, IME requires no code modifications.
IME IS A POWERFUL BURST BUFFER AND A MULTI-DIMENSIONAL SOLUTION, DESIGNED TO DELIVER NEW ACCELERATION & EFFICIENCIES
With the IME14K, large-scale, data-intensive environments now have the tools to leverage untapped compute cycles and run more applications. This increase in efficiency and performance is achieved by minimizing I/O bottlenecks and accelerating applications and file systems, all while reducing the erratic performance often found within storage clusters.
- A Write-Accelerating Burst Buffer that absorbs bulk application data into the IME14K NVMe solid-state cache significantly faster than the file system can absorb it.
- A File System Accelerator and Application Optimizer: IME reorders application I/O to optimize flushing the cache to long-term storage, enabling sites to purchase as little of the expensive cache as possible.
- A Read-Optimized Application-I/O Accelerator that enables out-of-band API configuration of the IME appliance to optimize both reads and writes, allowing more simultaneous job runs, shortening the job queue, and delivering significantly faster application run times to the user. The API integrates IME with job schedulers and pre-stages (warms) the cache for new jobs, accelerating the first read.
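The burst-buffer behavior described above — absorbing writes into a fast cache and reordering them before flushing to slower long-term storage — can be sketched in a minimal form. This is an illustrative model only; the class and method names are hypothetical and do not represent DDN's IME software or its API:

```python
class BurstBuffer:
    """Toy write-back burst buffer (illustrative sketch, not IME itself).

    Writes land immediately in a fast in-memory cache (standing in for
    the NVMe pool); flush() later drains them to the backing store in
    offset order, turning fragmented I/O into a sequential stream.
    """

    def __init__(self, backing_store):
        self.backing_store = backing_store  # dict-like: offset -> bytes
        self.cache = {}                     # absorbed but unflushed writes

    def write(self, offset, data):
        # Fast path: absorb the write into the cache and return at once,
        # decoupling the application from file-system latency.
        self.cache[offset] = data

    def flush(self):
        # Reorder the cached writes by offset before draining them to the
        # (slower) backing store, then free the cache for reuse.
        for offset in sorted(self.cache):
            self.backing_store[offset] = self.cache[offset]
        self.cache.clear()
```

Even in this toy form, the key design point is visible: the application sees only the fast `write()` path, while the expensive, reordered drain to long-term storage happens separately in `flush()`.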
ZERO-COMPROMISE, PERFORMANCE-FIRST DESIGN PHILOSOPHY
The IME14K is built upon our newest high-performance, hyper-converged DDN14K™ hardware platform, which utilizes the industry’s newest components to maximize I/O performance.
The result is high-speed tiering with non-volatile local and distributed memory pools, leveraging new multi-core processors, interconnects and memory technology. The IME14K also incorporates the industry’s most extensive collection of PCIe fabric with a non-blocking architecture that is purpose-built to accelerate I/O and to maximize the performance acceleration of both the IME intelligent software and NVMe SSDs.
This highly innovative solution is designed to achieve significantly higher efficiency from applications, workflows, and computing environments through intelligent and predictive tie-ins to job scheduling services. Each IME14K can be configured to start with a small number of NVMe drives and scale up to 48 NVMe SSDs, with expected performance starting at 10 GB/s and scaling up to 50 GB/s per 4U appliance. Multiple appliances can be scaled out to provide multiple TB/s of bandwidth to the most data-intensive workloads on the globe.
3 POWERFUL BUT SIMPLE BENEFITS
Speeds Time to Results by accelerating workflows, applications and bottlenecked I/O
Increases Compute ROI by eliminating I/O wait times, freeing cycles for more computation
Eases Data Growth by reducing traditional storage hardware