DDN BLOG

2019 is well underway, and we’re expecting the new year to continue to bring new advances in AI and Deep Learning, not to mention the ever-increasing number of data-intensive workloads that are certain to push the boundaries of today’s storage systems. Here’s a closer look at some of the storage trends and predictions that will be top of mind in 2019:

1. The emergence of large-scale AI and Machine Learning deployments. The past couple of years have seen many organizations trialing machine learning algorithms on small data sets, implementing small but growing deployments, and planning at-scale infrastructures. We have already seen successful projects emerge in predictive analytics for chronic disease management and workflow enhancement in radiology, as well as administrative and financial use cases that bring operational efficiency to these industries. 2019 will be the year that large-scale AI and machine learning environments emerge en masse, with organizations moving from deployments of 4, 8 or 16 GPUs to deployments that range from hundreds to thousands of GPUs. At-scale AI and machine learning environments pose unique challenges, including analytical workloads that are both read-intensive and dominated by random I/O, and the fact that the caching techniques that work in testing cannot scale to hundreds of terabytes of data in the flash cache or petabytes of data on the backend. Successful at-scale AI and machine learning infrastructures will require storage systems that can scale massively and transform the I/O with flash cache layers that sit between the applications and the file system.
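To make the caching problem concrete, here is a minimal Python sketch (the item counts and ratios are hypothetical, not tied to any particular system): with the uniform random sampling typical of shuffled deep-learning training, an LRU cache’s hit rate converges to roughly cache size divided by dataset size, so a cache that covered most of a small trial dataset contributes almost nothing against petabytes.

```python
import random
from collections import OrderedDict

def simulate_hit_rate(dataset_items, cache_items, accesses):
    """Estimate LRU cache hit rate for uniform random reads,
    the access pattern produced by shuffled training epochs."""
    cache = OrderedDict()
    hits = 0
    for _ in range(accesses):
        key = random.randrange(dataset_items)
        if key in cache:
            hits += 1
            cache.move_to_end(key)           # refresh LRU position
        else:
            cache[key] = True
            if len(cache) > cache_items:
                cache.popitem(last=False)    # evict least recently used
    return hits / accesses

# Hypothetical ratio: a cache holding 1% of the dataset's items
# yields roughly a 1% hit rate under random access.
print(simulate_hit_rate(dataset_items=1_000_000,
                        cache_items=10_000,
                        accesses=200_000))
```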

2. The advancement of granular data management capabilities for at-scale data systems and private clouds. At-scale data systems and private clouds are increasingly supporting diverse types of data, such as AI and deep learning workflows, that require advanced, granular data management capabilities. These data management solutions will need to deliver simplicity, allow for more sophisticated tagging and searching of the data itself, and provide insight into the types of data that customers have within their systems and within their clouds. Given the sheer amount of data required for deep learning projects, especially as some businesses begin to focus on deploying AI models that work better for real-world problems, having data from disparate sources clearly defined, labeled and discoverable will allow nimble companies to move quickly. Equally important is the efficient storage of long-term data: information that might not appear to be of value today, but in six months or two years may provide a unique differentiation.
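As a rough illustration of what granular tagging and search could look like (the schema and field names below are invented for the example, not any product’s API), even a thin metadata index kept alongside the storage lets teams answer questions about their data without scanning the data itself:

```python
import sqlite3

# Hypothetical metadata index: one row per stored object, with tags.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE objects (
    path    TEXT PRIMARY KEY,
    source  TEXT,     -- which system or department produced it
    labeled INTEGER,  -- 1 if annotated and ready for training
    created TEXT      -- ISO date, useful for long-term retention queries
)""")
db.executemany(
    "INSERT INTO objects VALUES (?, ?, ?, ?)",
    [("scans/ct/0001.dcm", "radiology", 1, "2018-11-02"),
     ("scans/ct/0002.dcm", "radiology", 0, "2019-01-15"),
     ("claims/2017/q3.parquet", "billing", 1, "2017-09-30")],
)

# "Which radiology data still needs labeling?" answered from tags alone.
for (path,) in db.execute(
        "SELECT path FROM objects WHERE source = ? AND labeled = 0",
        ("radiology",)):
    print(path)
```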

3. The move toward cloud-like data management models for on-premise deployments with transparent mobility to public cloud. The need for better, more granular data management capabilities for at-scale data systems and private clouds will drive the adoption of “cloud” models for on-premise environments. Areas such as security and multi-tenancy are becoming increasingly important as organizations want to allocate specific parts of data collections to users, groups or business units, granting or withholding collaborative access, and to provide services such as quality of service and guaranteed performance, especially for latency-critical applications. Cloud and cloud-native models have always been built on multi-tenant capabilities, but these capabilities are new, and needed, for data-intensive on-premise workloads. Equally important will be the portability of workloads: as public cloud systems become more capable of delivering high-performance compute and storage, on-prem systems will need to provide on-ramps that optimize data placement for the most efficient operations.
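The tenant-level controls described above might be expressed as policy. The sketch below is hypothetical (the field names and thresholds are invented): each tenant gets a capacity quota, a guaranteed performance floor, and a tiering rule that governs when cold data may move to public cloud.

```python
from dataclasses import dataclass

@dataclass
class TenantPolicy:
    name: str
    quota_tb: float              # capacity allocated to this tenant
    min_iops: int                # guaranteed performance floor (QoS)
    cloud_tier_after_days: int   # age at which cold data may leave on-prem

POLICIES = {
    "genomics":  TenantPolicy("genomics",  500.0, 50_000, 180),
    "radiology": TenantPolicy("radiology", 200.0, 20_000, 90),
}

def placement(tenant, object_age_days, used_tb):
    """Decide where data lives under a tenant's policy."""
    p = POLICIES[tenant]
    if used_tb >= p.quota_tb:
        return "reject: quota exceeded"   # enforce the capacity allocation
    if object_age_days >= p.cloud_tier_after_days:
        return "public-cloud tier"        # transparent mobility for cold data
    return "on-prem flash tier"           # keep hot data close and fast

print(placement("radiology", object_age_days=120, used_tb=150.0))
# -> public-cloud tier
```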

4. Major steps towards delivering autonomous storage. Storage systems themselves will also benefit from the refinement of algorithms and analytics, as they start to implement machine learning and AI plays an increasing role in supporting and even automating decisions. Storage vendors that are already gathering data about the data on their systems will be better placed to take advantage of this. Additionally, storage systems that already integrate tightly with VMs, containers and the applications inside them will be able to rebalance automatically to ensure those applications run optimally and remain fully protected against failures. The key is for systems to go beyond capacity and performance and delve into the needs and characteristics of the applications running on them. This combination of capabilities will let IT organizations spend more time on the value found in their data and less (ideally no) time struggling to manually optimize cost, lower risk, or deliver performance.
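As a toy example of what automated decision-making might build on (assuming the system already collects per-volume telemetry; the thresholds here are invented), even a simple statistical check on recent latency can flag a rebalance before an operator would notice:

```python
from statistics import mean, stdev

def needs_rebalance(latency_ms, sigma=3.0):
    """Flag a volume whose newest latency sample is an outlier
    (more than sigma standard deviations above its own recent
    history). A stand-in for the learned models a real
    autonomous system would apply."""
    history, latest = latency_ms[:-1], latency_ms[-1]
    if len(history) < 10:
        return False                 # not enough telemetry yet
    mu, sd = mean(history), stdev(history)
    return latest > mu + sigma * max(sd, 0.01)

# Hypothetical per-volume latency telemetry in milliseconds.
samples = [1.1, 0.9, 1.0, 1.2, 1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 6.5]
if needs_rebalance(samples):
    print("rebalance: migrate hot data off the congested volume")
```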

5. Accelerated adoption of at-scale flash deployments and the ascendancy of NVMe. The adoption of scale-out storage architectures that manage flash on demand and scale performance as needed will accelerate, as flash provides an optimal means of balancing performance and cost in long-term storage and as the price of flash storage drops significantly in the first six months of 2019. NVMe will become the default media for tier-1 applications (low latency, high IOPS and density – what’s not to like?), but NVMe-oF will continue to lag as a networking standard while more established RDMA networks like InfiniBand and RoCE continue to thrive and meet performance demands.

At DDN, we’re excited to see these trends evolve and are poised to help you take advantage of them with the most innovative, cutting-edge storage systems on the market! We look forward to talking with you in 2019, whether at the upcoming NVIDIA GTC events or at the plethora of trade shows we’ll be attending this year. If you have any questions in the meantime, you can always reach out to our technical experts, who would be happy to answer them.

We look forward to seeing you soon!

Kurt Kuckein, Sr. Director, Marketing
January 30, 2019