Server vendor Supermicro, a global provider of enterprise computing, storage, networking solutions and green computing technology, is among the first to adopt the NVIDIA HGX-2 cloud server platform, which it will use to develop server systems for artificial intelligence (AI) and high-performance computing (HPC).

“To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform that will deliver more than double the performance,” said Charles Liang, president and CEO of Supermicro. “The HGX-2 system will enable efficient training of complex models. It combines 16 Tesla V100 32GB SXM3 GPUs connected via NVLink and NVSwitch to work as a unified 2 PetaFlop accelerator with half a terabyte of aggregate memory to deliver unmatched compute power.”
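The topology described in the quote can be inspected from software. The sketch below is a minimal illustration, assuming a Python environment with PyTorch installed (not part of the announcement): it enumerates the visible GPUs, sums their memory, and checks peer-to-peer access between device pairs, which the NVSwitch fabric on an HGX-2 system should allow for every pair.

```python
import torch

# Enumerate the GPUs visible to this process; on an HGX-2 system this
# should report 16 Tesla V100 32GB devices.
num_gpus = torch.cuda.device_count()
total_mem_gb = 0.0
for i in range(num_gpus):
    props = torch.cuda.get_device_properties(i)
    total_mem_gb += props.total_memory / 1024**3
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")

# Roughly half a terabyte of aggregate GPU memory on a full HGX-2.
print(f"Aggregate GPU memory: {total_mem_gb:.0f} GB across {num_gpus} GPUs")

# NVSwitch provides all-to-all NVLink connectivity, so every GPU pair
# should be able to address each other's memory directly.
for i in range(num_gpus):
    for j in range(num_gpus):
        if i != j and not torch.cuda.can_device_access_peer(i, j):
            print(f"Warning: GPU {i} has no direct peer access to GPU {j}")
```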

Supermicro’s HGX-2-based systems are intended as a superset design for data centers accelerating AI and HPC in the cloud. With fine-tuned optimizations, the company says its HGX-2 servers will deliver maximum compute performance and memory capacity for rapid model training.
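As an illustration of the kind of multi-GPU model training such a system targets, the sketch below uses PyTorch’s DistributedDataParallel with one process per GPU. PyTorch, the torchrun launcher, and the synthetic model and data are assumptions for illustration only, not part of Supermicro’s announcement.

```python
# Launch with one process per GPU, e.g.: torchrun --nproc_per_node=16 train.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets LOCAL_RANK for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    # Stand-in model; a real workload would be a large vision or language model.
    model = nn.Sequential(
        nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 1000)
    ).to(device)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Synthetic batches stand in for a real training set.
    for step in range(10):
        inputs = torch.randn(64, 4096, device=device)
        targets = torch.randint(0, 1000, (64,), device=device)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()  # gradients are all-reduced across the GPUs
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

With the NCCL backend, the gradient all-reduce during the backward pass is exactly the kind of inter-GPU traffic the NVLink/NVSwitch fabric is designed to accelerate.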

GPU-Accelerated Computing

“As AI model complexity and size are exploding, researchers and data scientists need new levels of GPU-accelerated computing,” said Ian Buck, vice president and general manager of accelerated computing at NVIDIA. “HGX-2 provides the power to handle these massive new models for faster training of advanced AI, while saving significant cost, space and energy in the data center.”

For comprehensive information on Supermicro’s NVIDIA GPU system product lines, visit the Supermicro website.