
My name is Mike Vildibill. As VP of Product Management at DDN, I spend much of my time managing technologies through the entire product lifecycle, which begins with exploring emerging technologies and ends with a product’s retirement.  As the “business owner” of our products, the Product Management team at DDN must ensure that as a company we are building the right products, offering differentiation and value to our customers, and executing with efficiency and velocity.

In this blog series I hope to cover an interesting blend of topics that span the lifecycle of technology, with an emphasis on technologies and approaches that may challenge our assumptions.  One thing is certain: technology is typically evolutionary only until the next revolutionary thing comes along!  I hope these blogs encourage you to participate in a constructive dialog about how and why the storage industry is evolving the way it is, as we contemplate what the Next Big Thing is going to be.

The idea of caching and buffering is certainly not new.  These things happen virtually everywhere within complex IT systems and can dramatically improve the performance of CPUs, memory systems, network and storage I/O, etc.  Within the Exascale HPC community there is growing interest in establishing a new storage tier, commonly referred to as a Burst Buffer.  This tier resides between the compute system and its adjoining disk-based parallel filesystem.  The idea is that by placing fast non-volatile storage, such as SSDs, as a layer between compute node memory and persistent storage, such as a Lustre parallel filesystem, an application can have immediate access to vast amounts of data without having to deal with the tyranny of physics associated with accessing spinning media.  For a moment, just imagine trying to position a read/write head over a tiny spot of magnetic media on a tiny platter that happens to be rotating at thousands of revolutions per minute; now imagine an application accessing thousands, or even tens of thousands, of data streams in parallel.  The phrase “many moving parts” comes to mind.  A burst buffer can help – in a big way.

What’s in a Word?

A word about semantics.  Yes, “buffer” and “cache” are indeed different things.  Buffering implies an intermediate staging area for data while it transits to a more distant location, whereas caching implies a staging area for data that is likely to be re-used, perhaps extensively, before arriving at its final destination.  In the case of a burst buffer, we are in fact talking about both buffering and caching.  For the semantic purists, perhaps we could call it a Burst “Bacher” to reflect characteristics of both buffer and cache… but let’s not tackle that one right now.
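
To make the distinction concrete, here is a toy sketch in Python (the class names and structure are purely hypothetical, not a real burst buffer API): the buffer is a pass-through staging area, while the cache holds on to data for re-use.

```python
# A toy illustration of the distinction (hypothetical classes, not a
# real burst buffer API): a buffer passes data through on its way to a
# slower tier; a cache keeps data around because it may be re-used.

class WriteBuffer:
    """Buffering: a brief staging area for data in transit."""

    def __init__(self, backing_store):
        self.backing_store = backing_store    # e.g., a parallel filesystem
        self.pending = []

    def write(self, block):
        self.pending.append(block)            # absorb the burst quickly

    def flush(self):
        self.backing_store.extend(self.pending)  # data moves on...
        self.pending.clear()                      # ...and leaves the buffer


class ReadCache:
    """Caching: hold data that is likely to be re-used."""

    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.held = {}

    def read(self, key):
        if key not in self.held:              # miss: fetch from the slow tier
            self.held[key] = self.backing_store[key]
        return self.held[key]                 # hit: no trip to disk needed
```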

The Good, The Bad, and the Bountiful

The good news: SSDs can provide phenomenal bandwidth.  The bad news: relative to hard disks, the storage capacity of SSDs can be prohibitively expensive for many.  If only we could achieve the bandwidth of SSDs with the capacity and price structure of hard disks.  This is where the Burst Buffer comes into the picture.  The idea here is to attach non-volatile memory to a compute system to provide very high bandwidth, while supporting mechanisms that allow data to flow seamlessly between the SSDs and a disk-based high-performance parallel filesystem, thereby providing access to bountiful storage capacity.  Voila! The best of both worlds.
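
To see why the combination is attractive, consider a back-of-envelope calculation.  Every device figure below is a made-up assumption for illustration, not a vendor specification; the point is simply that SSDs tend to win on cost per unit of bandwidth while hard disks win on cost per unit of capacity.

```python
# Back-of-envelope economics (every figure is an illustrative assumption,
# not vendor pricing): SSDs win on dollars per unit of bandwidth, hard
# disks win on dollars per unit of capacity; hence the two-tier design.

ssd = {"cap_gb": 800,  "bw_mb_s": 500, "usd": 1000.0}  # hypothetical SSD
hdd = {"cap_gb": 4000, "bw_mb_s": 150, "usd": 200.0}   # hypothetical HDD

# Parallel filesystems often realize only ~1/4 of a disk's streaming
# bandwidth under fragmented, unaligned access (more on this below).
HDD_EFFICIENCY = 0.25

for name, dev, eff in (("SSD", ssd, 1.0), ("HDD", hdd, HDD_EFFICIENCY)):
    usd_per_gb = dev["usd"] / dev["cap_gb"]
    usd_per_mb_s = dev["usd"] / (dev["bw_mb_s"] * eff)
    print(f"{name}: ${usd_per_gb:.2f}/GB, ${usd_per_mb_s:.2f} per MB/s")

# With these made-up numbers: SSD ~$1.25/GB but ~$2.00 per MB/s, while
# HDD is ~$0.05/GB but ~$5.33 per effective MB/s.  Buy SSDs for the
# bandwidth, buy disks for the capacity, and move data between them.
```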

Faster, Cheaper and More Efficient

By its very nature, a burst buffer resides closer to the compute platform than a parallel filesystem does.  This means fewer hops, a shorter communications path, and less electronics to pass through, which translates to lower latency, lower cost, and greater bandwidth.  But there are even more important benefits to a burst buffer.  A typical parallel filesystem achieves perhaps only 1/4 of the potential bandwidth of its spinning disks due to the somewhat random access patterns caused by unaligned, fragmented or otherwise disjoint payloads.  A burst buffer can enable greater alignment, coalescing and assembly of payload data before it is sent to the spinning disks, which can lead to dramatically greater bandwidth.  Likewise, by reading large chunks of data from disk into the burst buffer, a smaller subset of that data can then be delivered to the application, reducing traffic across all of the electronics (servers, switches, cables, etc.) that sit between the compute nodes and the parallel filesystem.  Goodness all around.
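
As an illustration of the coalescing idea, here is a minimal sketch (hypothetical names, not DDN’s implementation) of a buffer that absorbs small application writes quickly and drains them to the filesystem as large, stripe-aligned writes.

```python
# A minimal sketch (hypothetical names, not DDN's implementation) of
# write coalescing: absorb small application writes quickly, then drain
# them to the parallel filesystem as large, stripe-aligned writes.

STRIPE = 1024 * 1024  # assume a 1 MiB filesystem stripe size

class CoalescingBuffer:
    def __init__(self, filesystem_write):
        self.filesystem_write = filesystem_write  # slow-tier write call
        self.staged = bytearray()
        self.offset = 0                           # file offset of staged[0]

    def write(self, data):
        self.staged += data                       # fast append for the app
        while len(self.staged) >= STRIPE:
            # Drain exactly one full, aligned stripe to spinning disk.
            self.filesystem_write(self.offset, bytes(self.staged[:STRIPE]))
            del self.staged[:STRIPE]
            self.offset += STRIPE

    def close(self):
        if self.staged:                           # final partial stripe
            self.filesystem_write(self.offset, bytes(self.staged))

# Demo: thousands of tiny, unaligned writes become a handful of big ones.
stripes_written = []
buf = CoalescingBuffer(lambda off, blk: stripes_written.append(len(blk)))
for _ in range(5000):
    buf.write(b"x" * 1000)
buf.close()
print(len(stripes_written))  # 5 large writes instead of 5,000 small ones
```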

What are the Exascale Systems Designers Planning?

The U.S. Department of Energy is at the forefront of exploring the implications of burst buffering.  In fact, a recent DOE RFP includes requirements for a burst buffer option that provides capacity 3x greater than the compute platform’s aggregate memory and 1/30th the capacity of the adjoining parallel filesystem.  Depending on your math, the bandwidth of said burst buffer is expected to be perhaps 6x greater than that of the parallel filesystem.  In other words, users want a burst buffer / filesystem cache that provides increased bandwidth and lower latency, and that is coupled on the back end to a parallel filesystem.  DOE intends to deploy such a system in 2H2015.  I think this is a glimpse into the future.
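
Working those ratios through a hypothetical machine makes the shape of such a system concrete.  The memory size and filesystem bandwidth below are assumptions chosen for illustration, not figures from the actual procurement.

```python
# Working the RFP ratios through a hypothetical machine.  The memory
# size and filesystem bandwidth below are illustrative assumptions,
# not figures from the actual procurement.

memory_pb = 2.0                        # assumed aggregate compute memory
pfs_bw_tb_s = 1.0                      # assumed parallel filesystem bandwidth

bb_capacity_pb = 3 * memory_pb         # burst buffer: 3x aggregate memory...
pfs_capacity_pb = 30 * bb_capacity_pb  # ...and 1/30th the filesystem capacity
bb_bw_tb_s = 6 * pfs_bw_tb_s           # perhaps 6x the filesystem bandwidth

print(f"burst buffer: {bb_capacity_pb:.0f} PB at ~{bb_bw_tb_s:.0f} TB/s")
print(f"filesystem:   {pfs_capacity_pb:.0f} PB at ~{pfs_bw_tb_s:.0f} TB/s")
# -> burst buffer: 6 PB at ~6 TB/s; filesystem: 180 PB at ~1 TB/s
```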

Enough About the Future, What About Today?

There is no shortage of applications for this type of I/O acceleration.  For example, the Wellcome Trust Sanger Institute is a leader in data-intensive science.  Within the Institute’s Illumina Production Sequencing core facility, they manage 27 high-speed DNA sequencers, each of which spits out untold volumes of data, 24x7x365.  The typical flow is for data to emerge from a sequencer and pass to an analysis and alignment engine; from there, the data typically moves to a server farm for number crunching.  In such a workflow, there is an opportunity to buffer the sequence data into high-speed non-volatile memory and, from that location, cache the data, as needed, to compute jobs running on nearby compute farms.  This combination of burst buffering plus caching improves effective bandwidth, reduces latencies, and minimizes unnecessary movement of data.
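
That flow can be sketched schematically.  Everything here is hypothetical (this is not Sanger’s actual software stack); it simply shows the instrument landing data in a fast staging tier while a background path archives it once and keeps it cached for compute jobs.

```python
# A schematic of the staging-plus-caching flow (hypothetical names; this
# is not Sanger's actual software stack).  The instrument lands reads in
# a fast staging tier and never waits on the disk tier or compute farm.

import queue
import threading

nvm_buffer = queue.Queue(maxsize=64)  # fast non-volatile staging tier
node_cache = {}                       # runs kept hot for compute jobs
archive = {}                          # stand-in for bulk disk storage

def sequencer(run_id, reads):
    # Instrument side: land raw reads in the burst buffer immediately.
    nvm_buffer.put((run_id, reads))

def stage_and_cache():
    # Drain the buffer: persist each run to bulk storage once, and keep
    # it cached so many analysis jobs can re-read it from the fast tier.
    while True:
        run_id, reads = nvm_buffer.get()
        archive[run_id] = reads       # one sequential write to the disk tier
        node_cache[run_id] = reads    # re-used by alignment / number crunching
        nvm_buffer.task_done()

threading.Thread(target=stage_and_cache, daemon=True).start()
sequencer("run-001", [b"ACGTACGT"])
nvm_buffer.join()                     # wait until staging has drained
print(sorted(archive), sorted(node_cache))  # both tiers now hold run-001
```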

DDN’s best and brightest have been working hard on these things.  You may have noticed recent announcements about DDN’s SFX (Storage Fusion Xcelerator) technology, which places lightning-fast SSDs on the front side of a Lustre parallel filesystem in order to deliver the benefits of buffering and caching.  And this is just the beginning.  Look for future DDN blogs on this…