
As SC13 is coming to an end, a new innovation era is beginning!

by Jean-Luc Chatelain Thursday, 21 November 2013

The carpets are about to be rolled back up at the Denver Convention Center following another SC; all the vendors are out of t-shirts and other tchotchkes; and the brightest minds in the industry are going back to the warm embrace of their supercomputing clusters.

I can’t speak for all the other vendors at SC, of course, but from an admittedly selfish point of view, I have to say that it has been another solid year for innovation at DDN – and I am extremely proud that our team has unveiled not just good innovation but game-changing innovation this year.

Exascale Computing initiatives (including this one and this one) are great catalysts, driving everyone in the industry to dare to go big – and to go big disruptively. I like disruptions: they are an antidote to apathy and the status quo.

A year ago, at SC12, I was asked to put together an overall industry presentation on (what else?) the future of data-driven computing for Big Data and Exascale…so I did.

My assertion was (and still is) that current information architectures are heavily fragmented, siloed, and overly complex. Many factors contribute to that. Some are business-driven, such as parochial approaches to data management. But the economics of memory and storage technology are mainly to blame. The significant delta between the price of DRAM and the price of “round and brown” storage has led to a multi-tiered approach to persistence, with as many as five tiers between DRAM and the final safe resting place of the data.

Scale then reared its ugly head as more complexity was layered on: parallel file systems, automated tiering (HSM), distributed-locking I/O subsystems, and so on. The problem is that it is a losing battle, because data volumes (and yes, variety and velocity) are growing at a faster pace than these approaches can successfully be operationalized. A perverse side effect is that applications have to contend with the complexity of the I/O subsystem rather than focusing on their core business logic. The emergence of NAND-based flash technology has helped some, but it has not fundamentally changed the architecture; it has merely substituted one tier of technology for another or, worse, in some cases added more tiers.

Fundamentally, application writers do not want to have to worry about I/O and persistence. Life would be so much simpler if everything could be done in DRAM and persistence just happened automagically.

To quote a very senior engineering executive at one of the largest business software companies: “I don’t want to do any I/Os, I just want to malloc() and free() and sometimes hibernate()”. The desire to simplify the application layer and the need for faster time to results are really what’s driving the “In-Memory <insert favorite tool category>” movement that is afoot and growing strong.
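To make that contrast concrete, here is a minimal C sketch of the difference between the I/O application writers are forced to care about today and the malloc()/free() ideal. The persistent_alloc()/persistent_free() names are hypothetical stand-ins for whatever middleware would hide the I/O; they are not a real API:

```c
#include <stdio.h>
#include <stdlib.h>

/* Today: the application owns the I/O path -- open, write, check, close,
 * with error handling at every step. */
static int save_results_today(const double *results, size_t n) {
    FILE *f = fopen("results.dat", "wb");
    if (!f) return -1;
    if (fwrite(results, sizeof(double), n, f) != n) { fclose(f); return -1; }
    return fclose(f) == 0 ? 0 : -1;   /* even close can fail on flush */
}

/* The ideal: allocate, compute, free -- and persistence just happens.
 * persistent_alloc()/persistent_free() are hypothetical illustrations. */
static void compute_ideal(size_t n) {
    double *results = malloc(n * sizeof *results); /* imagine: persistent_alloc(...) */
    if (!results) return;
    for (size_t i = 0; i < n; i++) results[i] = (double)i * i;
    /* no fopen/fwrite/fclose here: the runtime would persist the region */
    free(results);                                 /* imagine: persistent_free(results) */
}

int main(void) {
    double sample[4] = {1.0, 2.0, 3.0, 4.0};
    save_results_today(sample, 4);
    compute_ideal(4);
    return 0;
}
```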

Those who know me are fully aware that I happen to be a big fan of the In-Memory approach, but I am also a believer that next-generation NV memory (e.g., PCM, ReRAM, STT-RAM) is the “path to application salvation”. The economics of these upcoming technologies are such that they will allow extremely large amounts of fast, static, low-power memory to sit right next to DRAM (for full disclosure, I am very biased toward ReRAM), and with the right middleware layer they can make I/O “disappear” as seen from the application layer. This will reduce the tiers of persistence to one while eliminating complexity – with the important side effect that the true storage layer can then use slow, very fat, and very green spinning media.
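One way to picture how a middleware layer can make I/O “disappear” is memory mapping: the application’s stores go straight into a mapped region, and durability becomes a flush rather than an explicit write() call. Here is a minimal POSIX sketch of that idea – illustrative only, and not DDN’s implementation; on an NVM/DAX-capable system the same pattern maps byte-addressable persistent memory directly:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE 4096

int main(void) {
    /* Back a memory region with a file; with next-gen NV memory the same
     * idea exposes the persistent tier as plain addressable memory. */
    int fd = open("state.pmem", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    char *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* To the application this is just memory: no read()/write() calls. */
    strcpy(region, "checkpoint: iteration 42");

    /* Durability is a flush, not an I/O request issued by the app. */
    if (msync(region, REGION_SIZE, MS_SYNC) != 0) { perror("msync"); return 1; }

    munmap(region, REGION_SIZE);
    close(fd);
    return 0;
}
```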

This is why the conclusion slide of my presentation on future information architectures for Big Data and Exascale was as shown below:

This week, DDN unveiled the Infinite Memory Engine (IME) technology; its first instantiation, Burst Buffer, is driven by this vision of the future of information architecture. As some of my colleagues have already written in this blog, IME – deployed as an HPC burst-buffer cache – is focused on dramatically accelerating parallel file system I/O, and on taking the steps that free HPC applications from worrying about checkpointing, file locking, etc., without requiring them to be recoded for new object platforms or HPC middleware. All this dramatically reduces infrastructure costs, as it significantly drives down the number of spindles required to achieve very high throughput. It is the first step of many. My end goal, however, is to one day be able to tell application writers that I/Os as we knew them are indeed a thing of the past and that yes, all you have to do is malloc() and free()…the rest is simply IME magic!
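The internals of IME are beyond the scope of this post, but the burst-buffer pattern itself is simple to sketch: absorb the application’s checkpoint burst at flash speed, then drain it to the parallel file system off the critical path. A toy C illustration – ime_absorb() and pfs_drain() are hypothetical names for this sketch, not the actual IME API:

```c
#include <stdio.h>
#include <string.h>

#define BURST_BYTES (1 << 20)
static char nvm_tier[BURST_BYTES];   /* stand-in for the fast flash/NVM layer */
static size_t nvm_used;

/* Step 1: absorb the burst at memory speed -- the application returns to
 * computing immediately instead of waiting on spinning disks. */
void ime_absorb(const void *checkpoint, size_t len) {
    if (len > BURST_BYTES) len = BURST_BYTES;
    memcpy(nvm_tier, checkpoint, len);
    nvm_used = len;
}

/* Step 2: drain to the parallel file system at the PFS's own pace,
 * decoupled from the application's timestep. */
void pfs_drain(const char *path) {
    FILE *f = fopen(path, "wb");
    if (!f) { perror("fopen"); return; }
    fwrite(nvm_tier, 1, nvm_used, f);
    fclose(f);
}

int main(void) {
    char checkpoint[256] = "application state at timestep 1000";
    ime_absorb(checkpoint, sizeof checkpoint);  /* fast: app resumes here */
    pfs_drain("checkpoint.0001");               /* slow: off the critical path */
    return 0;
}
```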

As the title of this blog suggests, we are at the beginning of a new era of innovation and I am deliriously proud to be part of the team that’s at the forefront of driving true innovation in HPC and Big Data.

In a follow-up blog entry, I will expand on collaborative persistence and information lifecycle management for Big Data, as there is a very good reason why IME “drains” into a WOS Cloud, as depicted in the slide above.

To be continued…
