Tech Innovation / Thought Leadership / November 16, 2020
For Data-Centric Enterprises, NVIDIA Mellanox InfiniBand Networking Makes the Most Sense
At DDN we see the constant improvement of the data path as essential to further enabling customers’ data-centric applications like artificial intelligence (AI), modeling, and analytics. The new NVIDIA Mellanox InfiniBand architecture, NVIDIA Mellanox NDR 400Gb/s, helps fuel the next generation of applications that require an optimized, end-to-end architecture from storage media to processor...
Tech Innovation / Thought Leadership / November 13, 2020
DDN’s A3I Gets Faster and Simpler With GPUDirect Storage and the SuperPOD RA
DDN breaks its own record as the fastest AI storage with DGX A100: 162 GiB/s delivered directly to GPUs, 60x more than NFS, and provides a Reference Architecture with NVIDIA for DGX SuperPOD customers...
Promo / Tech Innovation / Thought Leadership / November 9, 2020
Google Cloud and DDN Partnership Modernizes Data-Centric Applications
There is no question that cloud interest, spending, and usage are growing across all IT environments. Google Cloud and DDN each bring specialized expertise to a partnership that delivers a more agile and flexible environment for data-intensive organizations...
Promo / Tech Innovation / Thought Leadership / October 5, 2020
Simplifying the Challenges of Scalable AI with NVIDIA DGX SuperPOD and A3I
AI is transforming workflows across industries, accelerating research, optimizing manufacturing, and creating new products in financial services...
Promo / Tech Innovation / Thought Leadership / August 27, 2020
Removing Risk From AI at Scale
The need for faster data infrastructure to support real-world AI platforms has increased markedly over the past few months. The broadening use cases for natural language processing, real-time video inference...
Promo / Tech Innovation / Thought Leadership / August 12, 2020
DDN AI Storage Gets Faster and Simpler with GPUDirect Storage
DDN breaks its own performance record with DGX A100: 178 GB/s delivered directly to GPUs, 60x more than NFS. How do organizations ensure that their environment runs optimally as it scales? Is your investment in GPU infrastructure, data scientists, data sources, and ingestion fully optimized, with the headroom to accelerate value from data?