What is Latency
Latency is the time one waits for a task to complete. Ultra-low latency means reducing that wait down to the limits physics imposes on the execution of the task. In networking, latency is often measured as the half round trip: the time for a single receive plus a single send.
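The half-round-trip figure can be made concrete with a small measurement sketch. The example below times one send plus one receive over a loopback TCP echo and halves the result; because it runs through the ordinary kernel stack on loopback, expect tens of microseconds rather than the NIC figures quoted in this article.

```python
import socket
import threading
import time

def echo_server(sock):
    # Accept one connection and echo a single message back.
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(64))

# Loopback TCP echo: one send plus one receive is a full round trip;
# half of that is the "half round trip" latency figure discussed above.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle batching

start = time.perf_counter_ns()
cli.sendall(b"ping")
cli.recv(64)
half_rtt_ns = (time.perf_counter_ns() - start) // 2
print(f"half round trip: {half_rtt_ns} ns")
cli.close()
```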
XtremeScale Packet Engine (Solarflare NIC ASIC): lowest possible latency, under 500 nanoseconds from wire to user space.
Typical network interface cards (NICs) often take seven or more microseconds (7,000+ nanoseconds) for a half round trip. Under normal network conditions Solarflare reduces this latency to roughly five microseconds. Unlike other NIC vendors, which offer a single method of OS bypass, the Data Plane Development Kit (DPDK), Solarflare offers a family of five techniques that it calls Universal Kernel Bypass (UKB). The family takes two key approaches: accelerating raw network packets, and accelerating higher-level interfaces based on traditional application sockets. For raw network packets Solarflare provides DPDK and its own unique layer, Ether Fabric Virtual Interface (EF_VI). For sockets applications Solarflare offers ScaleOut Onload, Onload and TCPDirect.
Intel initially developed the first DPDK set of drivers for its own family of adapters, but DPDK has since become a standard offering from most NIC providers looking to supply a consistent low-level form of kernel bypass. These libraries can be used to send and receive packets with a minimum number of CPU cycles, provide a fast packet-capture interface, and run third-party user-space TCP/UDP stacks. Solarflare initially contributed a build to DPDK.org in early 2017 based on its internal hardware abstraction layer; it has since released a native version for Solarflare hardware to DPDK.org that significantly increases packet performance over the original version.
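The key structural idea behind DPDK's low cycle counts is the poll-mode driver: a core spins on the receive queue instead of sleeping until an interrupt wakes it. The sketch below illustrates that busy-poll pattern generically with a non-blocking UDP socket; it is the concept only, not DPDK's actual rte_eth API.

```python
import socket

# Busy-poll receive loop in the spirit of DPDK's poll-mode drivers:
# instead of blocking (and paying for an interrupt plus a wakeup), the
# caller spins on the queue until a packet appears. Plain UDP here,
# purely to show the pattern.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.setblocking(False)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"pkt", rx.getsockname())

polls = 0
while True:
    try:
        data, _ = rx.recvfrom(2048)  # poll: returns immediately, full or empty
        break
    except BlockingIOError:
        polls += 1  # queue empty: spin instead of sleeping

print(f"received {data!r} after {polls} empty polls")
```

In a real DPDK application the spin loop calls the burst receive function on a dedicated core, which is why DPDK deployments typically pin one or more CPUs entirely to packet processing.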
Like DPDK, Solarflare's Ether Fabric Virtual Interface (EF_VI) is a very low-level method of accessing the XtremeScale X1 architecture at the packet level. It enables applications to directly manipulate the 2,048 virtual NIC queues available on Solarflare NICs, and in doing so provides the lowest possible latencies and highest available packet rates. Unlike with DPDK, though, a developer can register with the XtremeScale X1 NIC only the application flows it wants to accelerate; all other flows remain untouched and are steered to the kernel or other applications. This frees EF_VI from the housekeeping tasks that often burden DPDK implementations.
The eXpress Data Path (XDP) is a new feature within Linux. It provides an extremely high-performance path through the kernel device driver that permits the system to very efficiently accept, drop or forward packets based on a compiled bytecode representation of Berkeley Packet Filters (BPF). Support for XDP is provided by the kernel driver supplied by the NIC vendor. Since these primitive packet decisions are made before the kernel's network stack processes the packet, XDP is considered a lightweight form of kernel bypass. Solarflare's latest kernel driver supports XDP.
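XDP programs are eBPF bytecode loaded into the driver, which needs root privileges and kernel tooling to demonstrate. The older classic-BPF socket filter gives a runnable, Linux-only taste of the same model, compiled bytecode deciding a packet's fate before the application ever sees it. The sketch attaches a one-instruction drop-everything filter via SO_ATTACH_FILTER (this is the socket-filter mechanism, not XDP itself).

```python
import ctypes
import socket

# Classic BPF instruction and program layouts, per linux/filter.h.
class SockFilter(ctypes.Structure):
    _fields_ = [("code", ctypes.c_uint16), ("jt", ctypes.c_uint8),
                ("jf", ctypes.c_uint8), ("k", ctypes.c_uint32)]

class SockFprog(ctypes.Structure):
    _fields_ = [("len", ctypes.c_uint16), ("filter", ctypes.POINTER(SockFilter))]

BPF_RET, BPF_K = 0x06, 0x00
SO_ATTACH_FILTER = 26  # Linux-only socket option

# One-instruction program: "return 0" -> accept 0 bytes, i.e. drop every packet.
insns = (SockFilter * 1)(SockFilter(BPF_RET | BPF_K, 0, 0, 0))
prog = SockFprog(1, insns)

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, bytes(prog))
rx.settimeout(0.2)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"filtered", rx.getsockname())

try:
    rx.recv(64)
    dropped = False
except socket.timeout:
    dropped = True  # the filter discarded the datagram before delivery
print("packet dropped by BPF filter:", dropped)
```

XDP runs the same kind of bytecode far earlier, inside the driver before the kernel allocates its packet structures, which is what makes it useful for line-rate drop and forward decisions.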
ScaleOut Onload is a slimmed-down version of Onload that is provided with all XtremeScale X1 adapters at no additional cost. It provides a fully POSIX-compliant, TCP-only sockets interface for accelerating applications; UDP acceleration is NOT included, and UDP traffic is routed through the normal kernel stack. Most network applications, such as web hosting, are almost exclusively TCP. To clearly segment the ultra-low-latency market, ScaleOut Onload also has no access to the ultra-low-latency firmware in the XtremeScale X1. Even so, it delivers typical half-round-trip latency of 2,000 to 3,000 nanoseconds; by contrast, the normal kernel TCP stack is about 7,000 nanoseconds. ScaleOut Onload also includes an advanced socket-reuse algorithm that enables it to manage significantly larger numbers of active connections more effectively, affording applications like Nginx a 2-3X performance boost.
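ScaleOut Onload's own socket-reuse algorithm is proprietary, but the general idea of spreading one listening port across many workers can be seen in the kernel's SO_REUSEPORT option, which is how Nginx-style multi-worker accept scaling is commonly done on stock Linux. A minimal sketch:

```python
import socket

# SO_REUSEPORT lets several listening sockets share one TCP port; the
# kernel load-balances incoming connections across them, so each worker
# process can own its own accept queue. Shown for illustration only:
# this is the kernel mechanism, not ScaleOut Onload's implementation.
listeners = []
port = 0
for _ in range(4):  # e.g. one listener per worker process
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    port = s.getsockname()[1]  # subsequent sockets bind the same port
    s.listen(128)
    listeners.append(s)

print(f"{len(listeners)} listeners sharing port {port}")
```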
Onload is a high-performance, POSIX-compliant network stack from Solarflare that dramatically reduces latency and x86 utilization while increasing throughput. Onload supports the TCP/UDP/IP network protocols through the standard Berkeley (BSD) sockets Application Programming Interface (API), and requires no modifications to end-user applications. Onload delivers half-round-trip latency in the 1,000 nanosecond range; by contrast, typical kernel-stack latency is about 7,000 nanoseconds. It achieves these performance improvements in part by performing network processing in user space, bypassing the OS kernel entirely, and by reducing the number of data copies and kernel context switches. Networking performance is improved without sacrificing the security and multiplexing functions that the OS kernel normally provides.
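Because Onload intercepts the standard sockets API, the application itself stays ordinary sockets code. One of the themes it exploits, fewer data copies, is visible even at the plain sockets level: receiving into a caller-owned buffer avoids the fresh allocation that each recv() otherwise performs. A small sketch of that copy-reduction idea (standard Python sockets, not Onload's internals):

```python
import socket

# recv_into() writes incoming bytes directly into a preallocated,
# reusable buffer, avoiding the per-call allocation of recv(). This is
# a small-scale cousin of the copy avoidance a user-space stack like
# Onload performs internally.
a, b = socket.socketpair()
a.sendall(b"payload")

buf = bytearray(4096)   # reused across calls: no per-receive allocation
n = b.recv_into(memoryview(buf))  # kernel fills our buffer in place
print(buf[:n].decode())
```

The same unmodified sockets program is what Onload accelerates transparently at run time, which is why no application changes are required.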
The TCPDirect API builds on Onload by providing an interface to an implementation of TCP and UDP over IP. TCPDirect is dynamically linked into the address space of user-mode applications and granted direct (but safe) access to the XtremeScale X1 hardware. As a result, data can be transmitted to and received from the network directly by the application, without involving the operating system; this technique is known as kernel bypass. Kernel bypass avoids disruptive events such as system calls, context switches and interrupts, and so increases the efficiency with which a processor can execute application code. It also directly reduces host processing overhead, typically by a factor of two, leaving more CPU time available for application processing; this effect is most pronounced for network-intensive applications. Under very specific circumstances, with ideal hardware, TCPDirect can reduce latency from 1,000 nanoseconds down to the 20-30 nanosecond range, roughly a 200X reduction versus the 7,000 nanosecond half round trip of competing NICs.