Napatech Launches Set of New SmartNIC Capabilities

Photo: Napatech booth

Programmable Smart Network Interface Card (SmartNIC) provider Napatech has released a set of new SmartNIC features that allow unaltered, standard applications in edge and core data centers to take advantage of offloaded and accelerated networking and compute tasks.

Napatech’s programmable SmartNICs are used for Data Processing Unit (DPU) and Infrastructure Processing Unit (IPU) services in telecom, cloud, enterprise, cybersecurity and financial applications globally.

Enterprises, communications service providers, and cloud data center operators increasingly use workload-specific coprocessors to offload operations such as artificial intelligence (AI), machine learning (ML), storage, networking, and infrastructure services from general-purpose server CPUs as they deploy virtualized applications and services in core and edge data centers.

By running the offloaded workloads on hardware designed for those particular tasks, such as programmable SmartNICs, also known as Data Processing Units (DPUs) or Infrastructure Processing Units (IPUs), this architectural approach not only increases the availability of server compute resources for running applications and services but also improves system-level performance and energy efficiency.

Offload Trend, Global Data Center Deployments

Driven by the offload trend and by accelerating global data center deployments, programmable SmartNICs are the fastest-growing subset of the NIC market, with a Total Available Market that Omdia predicts will reach $3.8B per year by 2026.

Developers of cloud applications and services integrate industry-standard Application Programming Interfaces (APIs) and drivers into their software to maximize portability and shorten time to market. To avoid developing proprietary, vendor-specific versions of their software, data center operators must be able to choose offload solutions that comply with the relevant standards, according to Napatech.

Release 4.4 of Napatech’s Link-Virtualization software tackles this issue by integrating networking and virtual switching technologies that fully support the relevant open standards while providing “best-in-class” performance and functionality.

Link-Virtualization

Photo: Jarrod J.S. Siket, CMO at Napatech
“Napatech’s Link-Virtualization software enables data center operators to optimize the performance of their networking infrastructure in a completely standards-compatible environment, which maximizes their flexibility in selecting applications,” said Jarrod J.S. Siket, CMO at Napatech.

In particular, Link-Virtualization now provides a fully hardware-offloaded implementation of the Linux Virtio 1.1 Input/Output (I/O) virtualization architecture, including the default kernel NIC interface, which means that guest Virtual Machines (VMs) do not need a special or proprietary driver. To enhance the performance of features such as Open vSwitch (OVS), Link-Virtualization also supports the open-standard Data Plane Development Kit (DPDK) fast path running in guest VMs. And because it is fully compatible with OpenStack, Link-Virtualization integrates seamlessly into cloud data center infrastructures everywhere.
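To make the “DPDK fast path in the guest” idea concrete, the sketch below shows the general shape of a minimal DPDK poll-mode forwarding loop that a guest VM might run over its virtio port. The single port, queue depths, and mempool sizes are illustrative assumptions, not anything specific to Link-Virtualization, and error handling is abbreviated.

```c
/* Minimal sketch of a DPDK-style fast path in a guest VM.
 * Port 0, queue depths, and pool sizes are illustrative assumptions. */
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>
#include <rte_debug.h>

#define RX_RING 1024
#define TX_RING 1024
#define BURST   32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    uint16_t port = 0;                      /* assume a single virtio port */
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "mbufs", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    struct rte_eth_conf conf = {0};         /* default port configuration */
    rte_eth_dev_configure(port, 1, 1, &conf);
    rte_eth_rx_queue_setup(port, 0, RX_RING, rte_eth_dev_socket_id(port), NULL, pool);
    rte_eth_tx_queue_setup(port, 0, TX_RING, rte_eth_dev_socket_id(port), NULL);
    rte_eth_dev_start(port);

    /* Poll-mode loop: receive a burst, transmit it straight back out. */
    struct rte_mbuf *bufs[BURST];
    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST);
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);      /* drop anything not sent */
    }
    return 0;
}
```

Because the guest relies only on the standard virtio device model and the stock DPDK APIs shown above, the same binary can run whether the data path underneath is software vhost or a hardware-offloaded SmartNIC.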

Link-Virtualization also adds new features including IPv6 VxLAN tunneling, RPM-based setup for OpenStack Packstack, configurable Maximum Transmission Unit (MTU), live migration on packed rings, and port-based Quality of Service (QoS) egress policing. The software is offered on Napatech’s range of SmartNICs, powered by AMD (Xilinx) and Intel FPGAs, which support 1 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, 50 Gbps, and 100 Gbps port speeds.

One illustration of the “industry-leading performance” provided by Link-Virtualization is the complete offload of the OVS data path onto the SmartNIC, which requires just one host CPU core to operate the OVS control plane while providing throughput of 130 million packets per second for Port-to-Port (PTP) traffic and 55 million packets per second for Port-to-VM-to-Port (PVP) traffic.

Live Migration of Running Workloads

Reclaiming host CPU cores that were previously needed to run OVS and making them available for applications and services significantly reduces the number of servers required to support a given workload or user base, which in turn cuts the data center’s overall CAPEX and OPEX. The edge or cloud data center also consumes less power and becomes more energy efficient as a result.

To help estimate the cost and energy reductions for particular use cases, Napatech offers an online ROI calculator that data center operators can use to examine their anticipated savings.
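As a rough illustration of the arithmetic behind such savings (not Napatech’s calculator), the sketch below assumes a per-server core count, a number of cores consumed by software OVS, and a fixed workload, then compares how many servers are needed with and without the offload. Every figure is an assumption chosen only for the example.

```c
/* Back-of-the-envelope sketch of server savings from reclaiming OVS cores.
 * All figures are illustrative assumptions, not Napatech's ROI model.
 * Build with: cc roi.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const int cores_per_server   = 64;    /* assumed host CPU cores */
    const int ovs_cores_software = 8;     /* assumed cores burned by software OVS */
    const int ovs_cores_offload  = 1;     /* control plane only, per the article */
    const int workload_cores     = 6000;  /* assumed total app/service demand */

    int usable_sw = cores_per_server - ovs_cores_software;
    int usable_hw = cores_per_server - ovs_cores_offload;

    int servers_sw = (int)ceil((double)workload_cores / usable_sw);
    int servers_hw = (int)ceil((double)workload_cores / usable_hw);

    printf("Servers needed, software OVS : %d\n", servers_sw);
    printf("Servers needed, offloaded OVS: %d\n", servers_hw);
    printf("Servers saved                : %d\n", servers_sw - servers_hw);
    return 0;
}
```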

“Besides full support for standard APIs, the solution also incorporates critical operational features such as Receive Side Scaling (RSS) for efficiently distributing network traffic to multiple VMs and Virtual Data Path Acceleration (vDPA), which enables the live migration of running workloads to and from any host, whether or not a SmartNIC is present,” Siket added.
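For readers unfamiliar with RSS, the conceptual sketch below shows how a NIC-style indirection table maps a hashed flow 5-tuple to one of several receive queues (and hence to a VM). The hash function here is a simple stand-in rather than the Toeplitz hash real NICs use, and the table size and queue count are assumptions, not details of Napatech’s implementation.

```c
/* Conceptual sketch of RSS-style flow distribution: hash the flow tuple,
 * then index an indirection table that maps hash buckets to receive queues.
 * Hash, table size, and queue count are simplified assumptions. */
#include <stdint.h>
#include <stdio.h>

#define RETA_SIZE  128   /* indirection-table entries (assumed) */
#define NUM_QUEUES 8     /* receive queues / VMs (assumed) */

static uint32_t flow_hash(uint32_t src_ip, uint32_t dst_ip,
                          uint16_t src_port, uint16_t dst_port)
{
    /* Stand-in mixing hash, not the Toeplitz hash used by real hardware. */
    uint32_t h = src_ip ^ (dst_ip * 2654435761u);
    h ^= ((uint32_t)src_port << 16) | dst_port;
    h ^= h >> 15;
    return h * 2246822519u;
}

int main(void)
{
    uint8_t reta[RETA_SIZE];
    for (int i = 0; i < RETA_SIZE; i++)
        reta[i] = i % NUM_QUEUES;            /* spread buckets evenly over queues */

    /* Example flow: which queue (and therefore which VM) receives it? */
    uint32_t h = flow_hash(0xC0A80001u, 0xC0A80002u, 49152, 443);
    printf("flow hashed to bucket %u -> queue %u\n",
           h % RETA_SIZE, reta[h % RETA_SIZE]);
    return 0;
}
```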