Cirrascale Cloud Services, a provider of dedicated public and private multi-GPU cloud solutions for deep learning, has added a specialized NVMe hot tier storage offering to its cloud services platform. The new hot tier storage solution is powered by WekaIO Matrix, one of the world's fastest file systems.
The new offering helps remove storage bottlenecks for customers whose training and inference datasets consist of millions of files, improving outcomes and increasing the accuracy of deep learning models.
“We have been extremely impressed with the significant speed-up we have seen when processing large data sets and being able to keep GPUs fed with data from our back-end storage at tremendous speeds,” said Dave Driggers, CTO of Cirrascale Cloud Services. “With WekaIO Matrix, our customers are now able to experience the level of performance they expect by eliminating I/O starvation when running deep learning analytics.”
WekaIO’s Matrix software is a fully parallel and distributed file system designed from scratch to leverage flash technology. Both data and metadata are distributed across the entire storage infrastructure to ensure massively parallel access to NVMe drives. According to Cirrascale Cloud Services, WekaIO Matrix has demonstrated the ability to easily saturate a GPU cluster, delivering more than 10 GB/s per node across an InfiniBand network.
WekaIO Matrix is available immediately as an AI Ecosystem Partner solution on the Cirrascale Cloud Services platform, and current clients can sign up for a test drive to evaluate its performance.
“Our WekaIO Matrix software performs at its peak ability on the Cirrascale Cloud Services platform, providing an unmatched level of performance and scalability for customers in the AI space looking for flexibility in a cloud deployment,” said Andy Watson, CTO of WekaIO. “With WekaIO Matrix, they’ll be able to provide their growing list of deep learning customers with a fully parallel and distributed file system that can handle the most demanding data- and metadata-intensive operations.”