Tencent Cloud has announced that it will adopt NVIDIA Tesla GPU accelerators to help advance artificial intelligence for enterprise customers. Tencent Cloud will integrate NVIDIA’s GPU computing and deep learning platform into its public cloud computing platform.
This will provide users with access to a set of new cloud services powered by Tesla GPU accelerators, including the latest Pascal architecture-based Tesla P100 and P40 GPU accelerators with NVIDIA NVLink technology for connecting multiple GPUs and NVIDIA deep learning software.
NVIDIA’s AI computing technology is used worldwide by cloud service providers, enterprises, startups and research organizations for a wide range of applications.
“Companies around the world are harnessing their data with our AI computing technology to create breakthrough products and services,” said Ian Buck, general manager of Accelerated Computing, NVIDIA. “Through Tencent Cloud, more companies will have access to NVIDIA’s deep learning platform, the world’s most broadly adopted AI platform.”
The highly efficient parallel processing capabilities of GPUs also make the NVIDIA computing platform effective at accelerating a host of other data-intensive workloads, including advanced analytics and high-performance computing.
“Tencent Cloud GPU offerings with NVIDIA’s deep learning platform will help companies in China rapidly integrate AI capabilities into their products and services,” said Sam Xie, vice president, Tencent Cloud. “Our customers will gain greater computing flexibility and power, giving them a powerful competitive advantage.”
As part of the companies’ collaboration, Tencent Cloud intends to offer customers a wide range of cloud products based on NVIDIA’s AI computing platforms. This will include GPU cloud servers incorporating NVIDIA Tesla P100, P40 and M40 GPU accelerators and NVIDIA deep learning software. Tencent Cloud launched GPU servers based on NVIDIA Tesla M40 GPUs and NVIDIA deep learning software in December.
During the first half of this year, these cloud servers will integrate up to eight GPU accelerators each, providing users with “superior” performance while meeting the requirements of deep learning and other algorithms that involve ultra-high data volumes and ultra-large-scale deployments.