BREAK FREE FROM THE CHALLENGES & INEFFICIENCIES CAUSED BY I/O BOTTLENECKS
As high-performance environments grow, they put increasing pressure on their I/O infrastructure. Even when processing small amounts of data, a parallel file system can be crippled by certain I/O access patterns.
The new approach to solving file system bottlenecks in performance-sensitive environments is to add a layer of flash. How that layer is implemented makes all the difference.
DDN’s Infinite Memory Engine (IME) is a flash-native data cache that scales out to cost-effectively solve I/O scale and bottleneck challenges. Available as software only, as a commodity-based appliance, or as a custom appliance, IME is unlike traditional flash caches and flash tiers resident in servers or storage: it can both accept I/O at the rate compute creates it and optimize that I/O for file system ingest. The result is predictable, fast application performance with 1/10th to 1/100th the storage hardware.
IME customers achieve the same performance as the fastest file systems in the world while using only 1/10th to 1/100th of the hardware, for huge space and economic advantages.
1.5TB/s File System Built on DDN’s IME Burst Buffer Powers Japan’s Fastest Supercomputer
Professor Osamu Tatebe, Ph.D., presents the Oakforest-PACS 25 PFlops system at SC16.
IME Public Customers and Test Beds
IME is also unique in that it is architected for new media and has the data protection and availability features typical of a mature, stand-alone storage solution. Today, IME is primarily used for predictable job completion, out-of-core data, application acceleration, and as an HPC burst buffer.
IME: SCALE-OUT, FLASH-NATIVE DATA CACHE
- Read and write performance optimization
- Small, random I/O optimization
- Shared (many-to-one) file optimization
- Improved SSD lifetime
- Erasure coding
- Extreme rebuild speeds and performance protection
- Back-end parallel file system I/O optimization
- No application changes – POSIX, MPI-IO and API
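DDN does not publish the details of IME’s erasure-coding scheme, but the general technique the feature list refers to can be sketched with simple XOR parity, which tolerates the loss of any one chunk. All names here are illustrative, not IME’s actual implementation:

```python
# Conceptual sketch of parity-based data protection (single-parity
# XOR), NOT IME's real erasure code, which is proprietary.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks: list[bytes]) -> bytes:
    """Compute one parity chunk over equal-length data chunks."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = xor_bytes(parity, c)
    return parity

def reconstruct(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing chunk from survivors plus parity."""
    missing = parity
    for c in surviving:
        missing = xor_bytes(missing, c)
    return missing

chunks = [b"AAAA", b"BBBB", b"CCCC"]
parity = encode(chunks)
# Simulate losing chunk 1, then rebuild it from the rest plus parity.
rebuilt = reconstruct([chunks[0], chunks[2]], parity)
assert rebuilt == b"BBBB"
```

Real systems use codes that survive multiple simultaneous failures (e.g. Reed-Solomon), but the principle is the same: redundancy is computed across chunks so any lost piece can be rebuilt quickly from the survivors.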
IME is a powerful burst buffer and a multi-dimensional solution, designed to deliver new acceleration and efficiencies.
With IME, large-scale, data-intensive environments now have the tools to leverage untapped compute cycles and run more applications. This increase in efficiency and performance is achieved by minimizing I/O bottlenecks and accelerating applications and file systems, all while reducing the erratic performance often found within storage clusters.
A Write-Accelerating Burst Buffer that absorbs bulk application data into the IME14K NVMe solid-state cache significantly faster than the file system could absorb it.
A File System Accelerator and Application Optimizer, as IME reorders application I/O to optimize flushing the cache to long-term storage, enabling sites to purchase as little of the expensive cache as possible.
A Read-Optimized Application-I/O Accelerator that enables out-of-band API configuration of the IME appliance to optimize both reads and writes, allowing more simultaneous job runs, shortening the job queue, and delivering significantly faster application run times to users. The API integrates IME with job schedulers and pre-stages (warms) the cache for new jobs, accelerating first reads.
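The write-absorb-then-reorder flow described above can be sketched in a few lines. This is a hypothetical illustration of the general burst-buffer technique, assuming a simple offset-keyed backing store; the class and method names are invented, not IME’s actual API:

```python
# Hypothetical sketch of a burst buffer: absorb small, random-order
# writes at flash speed, then reorder them by offset so the flush to
# the parallel file system arrives as sequential I/O.

class BurstBufferSketch:
    def __init__(self):
        self._cache = []  # (offset, data) records, in arrival order

    def write(self, offset: int, data: bytes) -> None:
        # Absorb immediately into the cache, regardless of ordering.
        self._cache.append((offset, data))

    def flush(self, backing: dict) -> None:
        # Reorder by offset so the backing store sees sequential I/O,
        # then drain the cache.
        for offset, data in sorted(self._cache):
            backing[offset] = data
        self._cache.clear()

bb = BurstBufferSketch()
for off in (4096, 0, 8192):       # small writes in random order
    bb.write(off, b"x" * 512)

fs = {}                           # stand-in for the file system
bb.flush(fs)
assert list(fs) == [0, 4096, 8192]   # sequentialized on flush
```

The point of the sketch is the separation of concerns: the cache accepts I/O at the rate compute creates it, and the flush path presents the file system with the access pattern it handles best.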
IME DEPLOYMENT OPTIONS
IME14KX™ RACK SCALE FLASH ARRAY
ZERO-COMPROMISE, PERFORMANCE-FIRST DESIGN PHILOSOPHY
The IME14KX is built upon our newest high-performance, hyper-converged SFA14K® hardware platform, which utilizes the industry’s newest components to maximize I/O performance. The result is high-speed tiering with non-volatile local and distributed memory pools, leveraging new multi-core processors, interconnects, and memory technology. The IME14KX also incorporates the industry’s most extensive PCIe fabric, with a non-blocking architecture purpose-built to accelerate I/O and to maximize the performance of both the IME intelligent software and its NVMe SSDs.
This highly innovative solution is designed to achieve significantly higher efficiency from applications, workflows, and computing environments through intelligent and predictive tie-ins to job-scheduling services.
Each IME14KX can be configured to start with a small number of NVMe drives and scale up to 48 NVMe SSDs, with expected performance from 10GB/s at entry up to 50GB/s in each 4U appliance. Multiple appliances can be scaled out to provide multiple TB/s of bandwidth to the most data-intensive workloads on the globe.
IME240™ I/O PERFORMANCE-OPTIMIZED COMMODITY SERVER
THE START SMALL, SCALE-OUT BUILDING BLOCK FOR IME SOFTWARE
The IME240 utilizes a standard 2U commodity storage server chassis that has been modified to remove I/O-performance-blocking components and to add ultra-low-latency InfiniBand connectivity. The platform has been tested and verified for compatibility.
The IME240 allows sites to realize the I/O, application, and file system acceleration that IME delivers, but in a smaller, 20GB/s building block than the IME14KX described in the section above.
For environments that want maximum deployment flexibility, IME is also available as software only. Software-only IME is ideal for environments with IT limitations on supported hardware, those that want the flexibility to repurpose existing hardware, and those with open-platform requirements. Most importantly, whether you choose the IME14KX, the IME240, or software only, IME is the industry’s only burst buffer that is not locked into a specific server or storage hardware vendor. With DDN’s IME, the choice is yours.