Many leading financial services firms are facing a worst-case scenario caused by an unprecedented data deluge. What is driving this? Explosive growth in the number of clients and users relying on data-intensive use cases; financial workflows becoming extremely data driven; and near-exponential growth in the data created, stored, and analyzed across data centers at most financial services companies. Looming regulatory requirements mean that firms will increasingly need to provision and analyze far more data than they ever have before.
So, how has this been addressed so far? Most financial services data centers have relied on legacy architectures like DAS and NAS. While those architectures and “solutions” worked for smaller data volumes, data center managers at most financial services firms are quickly realizing that as capacity, model size, and client counts increase, their infrastructure can’t scale to meet new demands.
To raise performance, data center managers usually try one of two approaches: 1) build additional DAS or NAS environments like the ones they already have to deliver more scale, or 2) put something in front of the current DAS or NAS to make it faster – usually a flash device that acts as a performance cache. Despite their best intentions, because of the fundamental architectural limitations of DAS and NAS, key workloads like kdb+, SAS, and Hadoop-based applications continue to suffer degraded performance and impaired scaling. This vicious cycle of buying expensive, underperforming hardware in a futile attempt to serve a growing internal client base drives up budgets while delivering poor ROI. Scaling up infrastructure this way scales up cost and complexity without scaling up performance. There is a better solution – and successful firms are already using it.
DDN solutions are designed to address data-intensive problems at scale. We have been solving this very issue in some of the most demanding, mission-critical production environments, including oil and gas, manufacturing, financial services, and national laboratories. Using DDN’s innovative advances, our customers have been able to scale their infrastructure to 100,000 clients while delivering high performance, density, and unrivaled capability for extreme-scale, data-intensive applications. And we have the numbers to prove the difference we can make for financial services firms.
The Securities Technology Analysis Center (STAC) has just released STAC-M3 results for a system consisting of Kx Systems kdb+ 3.2 running on Intel Enterprise Edition for Lustre 2.2 with a DDN SFA12KX-40 storage platform. Two benchmark suites, ANTUCO and KANAGA, were executed by STAC and DDN/Intel. ANTUCO uses a limited dataset size of 4.5 TB to simulate the performance obtained with a dataset residing mostly on non-volatile media, and studies a broad range of read and write operations. KANAGA, on the other hand, studies performance on large datasets ranging from 33 TB to 897 TB, with large numbers of concurrent requests.
These are the first public STAC-M3 results on a parallel file system, and the results were spectacular! In this benchmark, using a combination of a parallel file system and distributed database queries across many of the STAC-M3 workload patterns, the DDN–Intel Enterprise Edition for Lustre solution conclusively outperformed direct-attached (internal) SSD-based solutions. The benefit of this approach is its shared data model, which enables consolidation of data into a single namespace as well as massively parallel data queries, while dramatically improving performance and reducing data management costs. Here are the key highlights:
- The DDN SFA12K-X driven infrastructure set world records in the 1- and 10-thread intervalized statistics benchmarks
- On average, we demonstrated 50% better latency characteristics on 13 of the most I/O-intensive workloads (compared to the prior iteration of the benchmarks)
- The SFA12K-X platform set world records in the scale-oriented KANAGA benchmarks, delivering up to 50% improvement in the 2-year and 4-year high-bid benchmarks and a 2.4X improvement in the 1-year high-bid benchmarks
- For scale-oriented workloads (characterized by the KANAGA benchmarks), the SFA12K-X platform delivered on average 40% better latency characteristics across all workloads
Another important takeaway is that customers can now use low-cost commodity servers with far less memory, e.g., 64 GB, and still expect extremely high performance regardless of workload type or size, because DDN can move data from persistent storage to RAM at line rate.
The STAC benchmark proved what many DDN financial services customers at proprietary trading firms, hedge funds, and investment banks experience on a daily basis: top productivity, performance, and density at lower CAPEX/OPEX, directly translating into billions of dollars in savings. I encourage you to talk with our technical experts to see how DDN can specifically help you leverage your data more efficiently and cost-effectively.
For more information on the STAC testing, please visit http://www.STACresearch.com/ddn.
For information on the SFA12K-X, please visit https://www.ddn.com/products/.