Cirrascale Cloud Services, a San Diego, CA-based provider of public and private dedicated cloud solutions, has added NVIDIA's A100 80GB and A30 Tensor Core GPUs to its multi-GPU deep learning cloud servers. According to the company, the offering gives enterprise customers mainstream options for a broad range of AI inference, training, graphics, and traditional enterprise compute workloads.
The NVIDIA A100 80GB Tensor Core GPU delivers game-changing innovations for inference workload optimization, stated Cirrascale, accelerating precisions ranging from FP32 down to INT4. To maximize utilization of computing resources, Multi-Instance GPU (MIG) technology lets up to seven instances, each with up to 10GB of memory, run simultaneously on a single A100.
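As a rough sketch of how MIG partitioning described above is typically configured, the following commands use the NVIDIA driver's `nvidia-smi` MIG interface to split an A100 80GB into seven slices (the `1g.10gb` profile name and GPU index `0` are assumptions; available profiles depend on the GPU and driver version):

```shell
# Enable MIG mode on GPU 0 (requires root; a GPU reset may be needed)
sudo nvidia-smi -i 0 -mig 1

# Create seven 1g.10gb GPU instances, each with a default compute
# instance (-C), carving the A100 80GB into seven isolated slices
# of roughly 10GB memory each
sudo nvidia-smi mig -i 0 \
  -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# List the resulting GPU instances to verify the partitioning
nvidia-smi mig -lgi
```

Each instance then appears to CUDA applications as an independent GPU with its own memory and compute resources, which is what allows several inference workloads to share one physical A100.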
“Model sizes and datasets in general are growing fast and our customers are searching for the best solutions to increase overall performance and memory bandwidth to tackle their workloads in record time,” said Mike LaPan, vice president, Cirrascale Cloud Services. “The NVIDIA A100 80GB Tensor Core GPU delivers this and more. Along with the new A30 Tensor Core GPU with 24GB HBM2 memory, these GPUs enable today’s elastic data center and deliver maximum value for enterprises.”
The NVIDIA A30 Tensor Core GPU is also available through Cirrascale Cloud Services. It offers versatile performance for a wide range of AI inference and mainstream enterprise compute workloads, such as recommender systems, conversational AI, and computer vision.
The A30 also supports MIG technology, providing up to four instances with 6GB of memory each, a strong price/performance ratio that makes it well suited to entry-level applications.
In short, Cirrascale's accelerated cloud server solutions with NVIDIA A30 GPUs provide the compute power, large HBM2 memory, 933GB/sec of memory bandwidth, and scalability via NVIDIA NVLink interconnect technology required to tackle massive datasets and turn them into valuable insights.
“Users deploying the world’s most powerful GPUs within Cirrascale Cloud Services can accelerate their compute-intensive machine learning and AI workflows better than ever,” said Paresh Kharya, senior director of Product Management, Data Center Computing at NVIDIA.