Some communication service providers (CommSPs) face challenges as they onboard and orchestrate virtual network functions (VNFs), due in part to hardware and software interdependencies. Application data planes depend on hardware accelerators and specific platform configurations that are not cloud native. Evolving standards for input/output virtualization, such as VirtIO, have performance limitations. Single Root I/O Virtualization (SR-IOV) addresses these performance requirements, but it comes at the cost of reduced flexibility (e.g. lack of live migration support).
We have not reached the level of disaggregation in networking where the hardware and software layers are completely independent and abstracted from each other, but I don’t think that’s the goal. As with many technology strategies, there is a range of options and no one-size-fits-all approach.
The high-level ETSI network functions virtualization (NFV) framework has helped us simplify and compartmentalize the VNF, management and orchestration (MANO), and NFV infrastructure abstractions. As we have scaled VNFs, it has become apparent that NFV data planes need additional focus. This blog explores a few options that may help us address this challenge.
Balancing Disaggregation & Availability between Hardware and Software Data Planes
We need to drive hardware abstraction that removes dependencies from the software data planes and ensures that today’s hardware accelerators and next-generation instruction optimizations remain usable by VNFs. Intel’s approach is to integrate competitive packet processing and data plane features in the Intel® Xeon® processor for application and VNF acceleration, complementing the infrastructure acceleration capabilities of standard NICs and SmartNICs. Intel’s view is that this platform partitioning, with infrastructure acceleration in NICs and SmartNICs and application acceleration in CPUs, will deliver the scalability required for the next generation of applications, which will be increasingly dependent on packet data.
Cloud native applications require network services that are abstracted and easy to use, without any platform dependencies, while delivering higher throughput and lower latency. Abstraction between hardware and software continues to deepen as data planes evolve to incorporate industry-standard network hardware and common APIs, supported by additional automation that decouples the layered systems.
Open Source Community Supports Evolution of Abstraction with the Data Plane Development Kit (DPDK)
Data plane VNFs require performant network interfaces, line-rate throughput, and minimal latency, all at minimal cost. The Data Plane Development Kit (DPDK) is part of this evolution: community updates to this de facto framework for NIC and hardware abstraction are intended to enable support of cloud native execution environments.
Figure 1: DPDK evolving to support hardware abstraction and Cloud Native VNFs
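To make that abstraction concrete, the sketch below shows the shape of a minimal DPDK forwarding loop. The same ethdev calls work whether the port is backed by a physical NIC, an SR-IOV virtual function, or a software vdev, because the poll-mode driver hides the hardware details. This is an illustrative sketch only; the single port, single queue, and default configuration are assumptions, not code from this blog.

```c
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NUM_MBUFS  8191
#define MBUF_CACHE 250
#define BURST_SIZE 32

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer (hugepages, cores, devices). */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Packet buffer pool shared by the RX queue. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
            NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    uint16_t port_id = 0;                 /* assumption: first probed port */
    struct rte_eth_conf port_conf = {0};  /* default port configuration */

    /* One RX and one TX queue; identical calls for any PMD-backed device. */
    if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) != 0 ||
        rte_eth_rx_queue_setup(port_id, 0, 1024,
            rte_eth_dev_socket_id(port_id), NULL, pool) != 0 ||
        rte_eth_tx_queue_setup(port_id, 0, 1024,
            rte_eth_dev_socket_id(port_id), NULL) != 0 ||
        rte_eth_dev_start(port_id) != 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    /* Poll-mode loop: receive a burst, transmit it back out the same port. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;
        uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);
        /* Free any packets the TX queue could not accept. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    return 0;
}
```

In a real deployment the core and port assignments, queue counts, and offload configuration would come from EAL arguments and orchestration rather than hard-coded values.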
The DPDK community delivered release 18.05, which supports a number of new capabilities, including:
- Dynamic memory scaling, an interrupt-driven mode, and a shared memory packet interface that provides high-performance packet transmit and receive between application instances (e.g. container-to-container messaging).
- Application interfaces (ethdev, cryptodev, compressdev, security, eventdev, bbdev) that augment installed platform hardware with software-equivalent implementations, giving applications a consistent baseline of functionality. The resulting multi-architecture, multi-vendor environment supports portability and high performance, with transparent access to the underlying platform.
- Wireless baseband device (bbdev), a common programming framework that abstracts hardware accelerators, based on FPGAs and/or fixed-function devices, that assist with 3GPP physical layer processing.
- Support for the updated Linux networking sub-system (e.g. AF_XDP), which sets up the OS and the NIC to virtualize transmit and receive ring buffers, with received packets direct memory accessed (DMA’d) to user space. This provides an option to keep the OS’s standard isolation between processes or containers, compared with raw PCI access from user space.
- Policy-based power management, which addresses “always on” network use cases with fine-grained control. This supports potential power savings via low-power modes, or higher performance by raising the frequency of select cores (see the sketch after this list).
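As an illustration of the fine-grained control the last item describes, here is a minimal, hypothetical sketch using DPDK’s rte_power library: a polling core steps its own frequency down after a run of empty polls and returns to maximum frequency when traffic reappears (the l3fwd-power sample application uses a similar heuristic). The port, queue, threshold, and function name are illustrative assumptions, not the policy interface itself.

```c
#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_power.h>

#define BURST_SIZE     32
#define IDLE_THRESHOLD 1000  /* empty polls before stepping frequency down */

static volatile bool keep_running = true;

/* Hypothetical worker, launched per core with rte_eal_remote_launch():
 * scales this core's frequency with the traffic it observes. */
static int
power_aware_rx_loop(void *arg)
{
    uint16_t port_id = *(uint16_t *)arg;
    unsigned int lcore = rte_lcore_id();
    uint64_t idle_polls = 0;

    /* Bind this lcore to the host's frequency-scaling interface. */
    if (rte_power_init(lcore) != 0)
        return -1;

    while (keep_running) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

        if (nb_rx == 0) {
            /* Sustained idle: step the core frequency down to save power. */
            if (++idle_polls >= IDLE_THRESHOLD) {
                rte_power_freq_down(lcore);
                idle_polls = 0;
            }
            continue;
        }

        /* Traffic present: return to maximum frequency for throughput. */
        idle_polls = 0;
        rte_power_freq_max(lcore);

        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);  /* placeholder for real processing */
    }

    rte_power_exit(lcore);
    return 0;
}
```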
The DPDK community is considering the following improvements for upcoming releases:
- Updated user space hardware drivers that build on top of generic OS device support (i.e. the mediated device framework) and accommodate legacy virtualized interfaces (e.g. VirtIO). By plumbing user space drivers underneath the DPDK interfaces, the framework saves on VM exits/entries, supports live migration, and eliminates the need to advance or develop additional virtual interfaces (e.g. VirtIO_crypto). This applies equally to NICs, accelerators, GPUs, and FPGA acceleration devices.
- The proposed addition of a Resource Coordinator, which would dynamically connect available hardware instances to the targeted application instances in coordination with external orchestration or local configuration.
Join the Collaboration, Drive Faster Toward Cloud Native Deployments
Intel works with our partners to address the challenges of VNF portability and onboarding in a standard manner. The DPDK community defines and implements the framework for hardware abstraction. This will deliver the desired scalability while making new, performance-enhancing features visible and available to VNFs, supporting the high performance and performance-per-watt efficiency that are fundamental tenets of NFV and network transformation.
Let’s collaborate and accelerate our journey to Cloud Native.
Follow me on Twitter @rgadiyar to continue the conversation, and visit https://www.intel.com/network for more information.
Get an overview of the solutions available to overcome NFV challenges like hardware and software dependencies in the briefing sheet "Delivering NFV Performance with Agility," so you and your team can deliver on your organization’s vision of NFV.