On Demand: DDN’s Annual User Group | Denver, CO 2023
DDN Summit Presentation
Marc Hamilton | Vice President of Solutions Architecture & Engineering | NVIDIA
Marc Hamilton takes a closer look at the impact of Generative AI on storage workloads, with guidelines for storage performance using NVIDIA GDS, and shows how NVIDIA uses digital twins for the evaluation of thermal airflow and liquid cooling strategies in large-scale systems design. Announcing the latest performance records for the NVIDIA Eos Exascale system, Marc also discusses the next generation of supercomputers for scientific discovery.
Genome Sequencing at Scale with NVIDIA H100 and DDN AI400X2
Chuck Seberino | Director of Accelerated Computing | Roche Sequencing Solutions
In this session, Chuck Seberino looks at the challenges Roche Sequencing Solutions faced as they evaluated their cloud vs. on-premises strategy for managing up to 25 PB of raw sensor data per day from nanopore genomic sequencing systems, and how they ultimately reduced their turnaround time from over two weeks to just two days.
Why Storage Performance Matters for Deep Learning and LLM Optimization
David Hall | Head of High Performance Computing | Lambda
David Hall shares the key factors to consider when building infrastructure for Generative AI and Large Language Models, and when choosing the right combination of expertise and technology. David also looks at how high global demand for GPU technologies is impacting supply chains and availability, and at the need to maximize infrastructure productivity and utilization.
Welcome to the DDN User Group
Alex Bouzari | CEO | DDN
Alex Bouzari, CEO of DDN, welcomes the audience to the DDN User Group 2023 with a preview of DDN’s vision for AI at the edge, in the datacenter, and in the cloud.
The Dawn of a New AI Storage Paradigm
James Coomer & Sven Oehme | SVP Products & CTO | DDN
James Coomer and Sven Oehme share the latest news on DDN Infinia, the new optimized data platform from DDN to accelerate Generative AI, LLMs and Big Data. With the ability to accelerate S3 workloads by 10-100x, and native multi-tenancy and QoS management, DDN Infinia’s software-defined storage architecture complements DDN’s EXAScaler platform to extend DDN’s leadership in accelerated computing storage for AI.
Charting the Future Roadmap for EXAScaler and Beyond
Andreas Dilger | Lustre CTO | Whamcloud
In this session, Andreas Dilger gives an update on the Lustre community and on trends in high-performance computing. He also shares the enhancements included in Lustre and EXAScaler to support high-performance QLC storage and client-side compression, and highlights the capabilities available for security, multitenancy, and workload isolation.
Deploying Large AI Clusters & Storage for LLMs
Paul Wallace | Product Management Director | DDN
Paul Wallace reviews recent research on Generative AI and LLMs, highlighting the dependence of these workloads on both read and write performance and the impact of checkpointing on AI productivity, with guidelines for sizing AI systems and storage using the latest NVIDIA DGX H100 systems.