Verge.io, a company with a distinctive approach to virtualizing data centers, has added major new functionality to its Verge-OS software to deliver GPU performance as a virtualized, shared resource. The company describes it as a “cost-effective, simple and flexible” way to run GPU-based machine learning, remote desktop, and other compute-intensive workloads within an “agile, scalable, secured” Verge-OS virtual data center.
By abstracting compute, network, and storage from commodity servers, Verge-OS creates pools of raw resources that are simple to run and manage, along with feature-rich infrastructures for a range of environments and workloads. Use cases include clustered HPC in universities; ultra-converged and hyperconverged enterprises; DevOps and Test/Dev; compliant medical and healthcare deployments; remote and edge computing, including virtual desktop infrastructure (VDI); and xSPs offering hosted services such as private clouds.
Currently available techniques for deploying GPUs system-wide are complicated and expensive, especially for remote users. Rather than distributing GPUs around the enterprise, Verge.io lets users and applications with access to a virtual data center share the processing resources of a single GPU-equipped server. Users or administrators can ‘pass through’ a GPU to a virtual data center simply by creating a virtual machine with access to the installed GPU and its resources.
Alternatively, Verge.io can manage GPU virtualization itself and present vGPUs to virtual data centers, letting businesses manage vGPUs on the same platform as all their other shared resources.
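Verge-OS exposes passthrough and vGPU through its own management layer, and the article does not show the underlying mechanism. As a rough analogy for readers familiar with generic KVM stacks, PCI passthrough of a GPU to a virtual machine is typically declared as a libvirt hostdev device; the PCI address below is illustrative, not taken from the article:

```xml
<!-- libvirt domain XML fragment: pass a physical GPU through to one VM.
     The host PCI address (bus 0x65, slot 0x00) is a placeholder. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

In the vGPU case, the same stack instead attaches a mediated device (`type='mdev'`) backed by a vendor-defined vGPU profile, so several VMs can share one physical card; this is the trade-off the article describes between dedicating a GPU to one virtual data center and pooling it as vGPUs.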
“The market is looking for simplicity, and Verge-OS is like an ‘Easy Button’ for creating a virtual cloud that is so much faster and easier to set up than a private cloud,” said Darren Pulsipher, Chief Solution Architect of Public Sector at Intel. “With Verge-OS, my customers can migrate and manage their data centers anywhere and upgrade their hardware with zero downtime.”
Hypervisor, Networking, Storage Solutions

“The ability to deploy GPU in a virtualized, converged environment, and access that performance as needed, even remotely, radically reduces the investment in hardware while simplifying management,” said Yan Ness, Chief Executive Officer (CEO) at Verge.io. “Our users are increasingly needing GPU performance, from scientific research to machine learning, so vGPU and GPU Passthrough are simple ways to share and pool GPU resources as they do with the rest of their processing capabilities.”
Verge-OS is ultra-thin software – fewer than 300,000 lines of code – that the company says is easy to install and scale on “low-cost” commodity hardware, and that uses AI/ML to self-manage the system.
To streamline operations and shrink complex technology stacks, a single license replaces separate hypervisor, networking, storage, data-security, and administration products.
Secure virtual data centers based on Verge-OS include all enterprise data services, such as global deduplication, disaster recovery, continuous data protection, snapshots, long-distance sync, and auto-failover. They are well suited to building honeypots, sandboxes, cyber ranges, air-gapped systems, and secure enclaves that comply with regulations such as HIPAA, CUI, SOX, NIST, and PCI. Nested multi-tenancy lets service providers, departments, and campuses assign resources and services to groups and subgroups.
Verge.io currently supports only NVIDIA Tesla and Ampere cards, and vGPU support requires the purchase of additional licenses.