DDN Blog | THINK BIG: Insights & Perspectives on Leveraging Big Data

Among the biggest storage topics last year were the transition to SSDs and the rise of burst buffers. At DDN, we were at the forefront of these trends, re-energizing HPC workflows with the introduction of new solutions for data acceleration, exascale, object storage, and the cloud.

As we begin 2017, we continue to see demand for technologies that support next-generation and exascale computing infrastructures. Flash, object storage, and cloud will continue to dominate as organizations seek high performance at lower cost.

Here’s a look at some of the storage trends we expect to see in 2017:

Burst buffers come of age in HPC environments. Growing numbers of HPC sites will turn to burst buffer technology as a faster, more efficient complement to persistent storage for data-intensive workloads (offloading I/O from compute resources, decoupling storage bandwidth purchases from capacity purchases, and supporting parallel file systems) now that proven evidence of customer benefit exists. Additionally, expect burst buffer use cases to expand to include file system acceleration (application optimization) and core extender (read-optimized application I/O acceleration) use cases.
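The I/O offload idea above can be sketched in a few lines: the application dumps its checkpoint onto a fast flash tier and immediately resumes computing, while a background drainer stages the data out to the parallel file system. This is a toy model, not DDN's implementation; the class and method names are hypothetical, and temporary directories stand in for NVMe and Lustre/GPFS mounts.

```python
import os
import queue
import shutil
import tempfile
import threading


class BurstBuffer:
    """Toy burst buffer: absorb write bursts on a fast tier, drain
    them asynchronously to the (slower) parallel file system."""

    def __init__(self, fast_tier, parallel_fs):
        self.fast_tier = fast_tier        # stand-in for node-local NVMe
        self.parallel_fs = parallel_fs    # stand-in for a parallel FS mount
        self.pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write_checkpoint(self, name, data):
        # Fast path: land the burst on flash, then return control to the
        # application without waiting on the parallel file system.
        path = os.path.join(self.fast_tier, name)
        with open(path, "wb") as f:
            f.write(data)
        self.pending.put(path)

    def _drain(self):
        # Slow path: asynchronously stage files out to persistent storage.
        while True:
            path = self.pending.get()
            dest = os.path.join(self.parallel_fs, os.path.basename(path))
            shutil.move(path, dest)
            self.pending.task_done()

    def flush(self):
        # Block until every staged checkpoint has been drained.
        self.pending.join()


fast = tempfile.mkdtemp()
pfs = tempfile.mkdtemp()
bb = BurstBuffer(fast, pfs)
bb.write_checkpoint("step_0001.ckpt", b"x" * 1024)
bb.flush()
print(sorted(os.listdir(pfs)))
```

The key property is the decoupling the paragraph describes: compute-side write bandwidth is bounded by the flash tier, while the parallel file system only needs to sustain the (much lower) average drain rate.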

Flash usage will continue to dominate in HPC environments. Though all-flash arrays will still see limited use according to our annual survey, flash in more tiers and for more focused usage at multiple levels will dominate flash deployments. The debate will continue about how to take optimal advantage of flash: whether an all-flash array or flash as a software-defined tier is the better approach to accelerate metadata, data, and/or applications. In any case, more organizations are deploying flash to accelerate application and I/O performance. Storage vendors will have to look beyond individual applications and accelerate entire workflows, which means placing flash in multiple tiers of storage. Flash will increasingly be placed throughout the workflow to reduce latency even further and to increase application performance by orders of magnitude. This move will further blur the line between caching and storage, as the flash layer naturally becomes more persistent while performance-intensive applications rely on this ultra-fast tier to deliver results at the highest speeds.

Object storage becomes a “common” part of the storage mix (archive, on-premises, multi-site, private/hybrid cloud). We have been talking for years about “the year” that object storage will finally catch on. With massive data growth continuing to plague storage administrators, and with advances in integration and user acceptance, 2017 is finally “the year.”

In order to remain competitive, organizations will take a fresh look at their storage strategies and finally adopt more cost-effective platforms that include object storage supporting their petascale, and soon exascale, environments. There will finally be a realization of object storage as a stable, proven solution that helps maximize storage density, increase utilization, reduce scaling complexity, improve performance, and optimize TCO. Further, the seamless integration that now makes deploying and using object storage easy and transparent to end users will be key to increased adoption. Not only will end users not know they are using object storage in the background (because it is so seamless), but administrators may not know either. They will simply know their storage solution is easier to manage, expand, and maintain than ever before, even as it grows beyond expectations. As a result, organizations will increasingly add object storage to their overall mix to better accommodate continued data expansion over time.
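The "transparent to end users" point amounts to hiding a flat key/object namespace behind a familiar path-style interface. A toy sketch of that gateway pattern (all names here are hypothetical, and an in-memory dict stands in for a real S3-style bucket; production file-to-object gateways are of course far more involved):

```python
class ObjectStore:
    """Flat key -> bytes namespace, like a bucket (toy in-memory stand-in)."""

    def __init__(self):
        self._objects = {}

    def put_object(self, key, body):
        self._objects[key] = body

    def get_object(self, key):
        return self._objects[key]


class FileGateway:
    """Exposes familiar path-style reads and writes while storing
    everything as flat objects underneath; the end user never sees
    object semantics."""

    def __init__(self, store):
        self.store = store

    def write(self, path, data):
        # The file path simply becomes the object key.
        self.store.put_object(path.lstrip("/"), data)

    def read(self, path):
        return self.store.get_object(path.lstrip("/"))


gw = FileGateway(ObjectStore())
gw.write("/projects/run42/results.csv", b"x,y\n1,2\n")
print(gw.read("/projects/run42/results.csv"))
```

Because the mapping is mechanical, the backend can scale as a flat namespace (the property that makes object storage cheap to grow) while users keep the directory-style view they already know.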

HPC environments adjust their public cloud strategies. Aggressive public cloud adoption plans will shift (at least in the near term) to private and hybrid cloud strategies, a response to cost, latency, and sheer data immobility issues in public cloud trials. A recent DDN user survey found that 37 percent of HPC respondents are planning to leverage a cloud for at least part of their data in 2017, up almost 10 percent from last year. But of those, an overwhelming majority (more than 80 percent) is choosing private and hybrid clouds over a public-cloud-only option.

Leading data-intensive enterprises and HPC centers will continue to find creative ways to break performance barriers, completely redefining space and cost requirements. Similar to a recent move by Yahoo! JAPAN to implement an intercontinental active archiving solution that delivers a 74 percent energy cost savings, expect more HPC-powered organizations to work with technology vendors on truly innovative ways to overcome the cost and performance challenges created by continued massive data growth.

More choice and performance in HPC networking. As Intel Omni-Path continues to gain traction, expect it to go head-to-head with InfiniBand as Mellanox fights back with its 200 Gbit/s option. While Omni-Path will get a lot of attention and new systems, the next round may go to Mellanox. As a direct connection to storage, however, the field will remain narrow, with DDN in the lead. You can listen to JCAHPC talk about their Omni-Path success here.

SSD supplies will continue to tighten. The imbalance between manufacturing capacity and demand will continue in 2017. As a result, $/GB will not drop as fast as it usually does for new storage media, and there are open questions about how the supply shortfall will affect the transition away from spinning media. In addition, hybrid systems combining flash and spinning media will remain strong through 2017 as customers balance performance against cost of capacity.

Field-programmable gate arrays (FPGAs) will remain a credible threat to traditional microprocessors. With market leaders perceived as slow to innovate, the door will once again be open in 2017 to challengers such as ARM.

We look forward to continuing our path of data storage innovation by delivering solutions that provide the best price/performance, capacity, and latency, along with software that accelerates and optimally manages end-to-end workflows.

Cheers to a new year of industry leadership, collaboration, and success!

  • Michael King