AI/ML workloads have become a cornerstone of successful data strategies in modern organizations, regardless of size or industry. However, to deliver meaningful insights from raw data in acceptable timeframes, AI/ML workloads often consume significant compute resources, demanding costly upgrades and complex management.
Organizations can improve the efficiency of their AI/ML infrastructure by deploying optimized hardware and virtualizing their memory resources. By moving from costly DRAM to more efficient memory solutions, they can gain the capacity needed for data-intensive workloads while keeping IT costs in check.
In this webinar, experts from phoenixNAP, The Translational Genomics Research Institute (TGen), MemVerge, and Intel will provide practical guidance for using Intel® Optane™ Persistent Memory (PMem) and MemVerge Memory Machine™ software to improve AI/ML pipelines.
Watch this session to learn how to build optimized infrastructure for AI/ML workloads using the latest hardware technologies.
The webinar covers:
Lesson 1: Infrastructure Challenges for AI/ML Processing (Gerald Kunstman, phoenixNAP)
Lesson 2: Single Cell Sequencing Best Practices (Glen Otero, TGen)
Lesson 3: Optimizing Memory Use with Intel Optane Persistent Memory (Sridhar Kayathi, Intel)
Lesson 4: Memory Virtualization Benefits and Best Practices (Charlie Yu, MemVerge)
Publisher: phoenixNAP Global IT Services