In this new world, virtual network functions (VNFs) will run in virtual machines (VMs) or containers, with either approach serving as the unit of orchestration and management for VNFs. Intel is contributing to open source projects that cover both approaches and the associated infrastructure and tools.
Working with the Open Platform for NFV (OPNFV) community, Intel has invested heavily in maturing VM-based NFV infrastructure (NFVi). This work focuses on ensuring that data plane applications running on NFVi can achieve maximum performance and meet or exceed expectations for quality of service (QoS) and high availability.
In a parallel effort, we have been developing NFVi that uses containers as the unit of orchestration and management. We will show an example of this approach in a demo at this week's ANGA COM exhibition and congress in Cologne, Germany. In this demo, we will have an Intel data plane platform running a sample data plane application on an industry-standard, high-volume server that incorporates the latest Intel technologies, such as enhanced Hyper-Threading technology, improved instruction execution, new instructions, and a new cache hierarchy.
This demo will also showcase the capabilities of Kubernetes, an open source project for container orchestration and management, first developed by Google. It was written with both cloud and enterprise applications in mind, although not necessarily for VNFs or other applications running in the data plane. Intel is an active contributor to the Kubernetes project because we want to ensure the features and general infrastructure exist to help organizations realize the full capabilities of the server hardware platform when running data plane applications. And to simplify life for developers, we also want to make sure this support is either transparent to, or integrated seamlessly with, the upper layers of management and control.
In the demo, the application runs inside several containers that have been automatically orchestrated by Kubernetes for maximum performance in terms of packet throughput and latency. This performance is repeatable and deterministic: from flow to flow, you can expect approximately the same throughput and latency, which allows a service provider to define a QoS metric for this application.
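To make the idea of a QoS metric concrete, here is a minimal, hypothetical sketch, not code from the demo itself, of how a provider might validate that per-flow throughput, latency, and flow-to-flow jitter stay within agreed bounds. The flow values and thresholds below are invented for illustration; in practice they would come from platform telemetry such as DPDK metrics.

```python
import statistics

# Hypothetical per-flow measurements (throughput in Mpps, latency in
# microseconds). Real values would come from telemetry, not hard-coded data.
flows = [
    {"throughput_mpps": 11.8, "latency_us": 42.0},
    {"throughput_mpps": 11.9, "latency_us": 41.5},
    {"throughput_mpps": 11.7, "latency_us": 42.3},
]

def meets_qos(flows, min_mpps, max_latency_us, max_jitter_us):
    """Check every flow against throughput/latency bounds, plus the
    flow-to-flow latency spread (a simple determinism measure)."""
    latencies = [f["latency_us"] for f in flows]
    jitter = statistics.pstdev(latencies)  # spread across flows
    return (
        all(f["throughput_mpps"] >= min_mpps for f in flows)
        and all(l <= max_latency_us for l in latencies)
        and jitter <= max_jitter_us
    )

print(meets_qos(flows, min_mpps=11.5, max_latency_us=45.0, max_jitter_us=1.0))
# → True
```

The key point is that determinism itself becomes measurable: if the latency spread across flows exceeds the jitter bound, the application fails its QoS check even when average latency looks acceptable.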
We have started exposing the right metrics from the underlying server platform to enable QoS by upstreaming modifications to two open community projects: Kubernetes and DPDK. The demonstration exposes the telemetry necessary to enable automation of future networks. When Kubernetes is aware of the server platform and its capabilities, it can schedule containers more intelligently, yielding higher packet throughput per core and lower latency per packet, with performance that remains repeatable and deterministic.
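The scheduling idea can be illustrated with a small, self-contained sketch. The node names and feature labels below are hypothetical, and this is a toy filter rather than the Kubernetes scheduler itself; real Kubernetes achieves a similar effect through node labels and scheduler predicates. A scheduler that knows which nodes expose huge pages and isolated cores can place a data plane workload only where it will run deterministically:

```python
# Toy capability-aware scheduler: filter nodes by the platform features a
# data plane workload requires. Names and labels are illustrative only.
nodes = {
    "node-a": {"hugepages-2Mi", "sriov", "isolated-cores"},
    "node-b": {"hugepages-2Mi"},
    "node-c": set(),
}

def schedulable_nodes(nodes, required_features):
    """Return the nodes that advertise every feature the workload needs."""
    return sorted(
        name for name, feats in nodes.items()
        if required_features <= feats  # subset test: all requirements met
    )

# A DPDK-style VNF that needs huge pages and dedicated cores:
print(schedulable_nodes(nodes, {"hugepages-2Mi", "isolated-cores"}))
# → ['node-a']
```

Without platform awareness, the workload could land on node-b or node-c and miss its throughput and latency targets; with it, placement is restricted to nodes that can actually deliver the required performance.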
Collectively, these new features will allow Kubernetes to understand the capabilities of the underlying hardware platforms and automatically orchestrate VNFs (i.e., data plane applications) to run at maximum capacity and meet quality of service requirements for the network. Anyone running data plane applications can take advantage of these features to get greater value from Intel-based server platforms and accelerate the cable network transformation.
For a look at some of the other steps Intel is taking to foster a vibrant, open source ecosystem and drive network transformation, visit our meeting room (Hall 7, MS10) at ANGA COM, and learn more at Intel® Network Builders.
Intel is also showing the first live demonstration of Full Duplex DOCSIS, from cloud to client, at ANGA COM 2017. Keith Wehmeyer, GM of Intel’s Cable Business Line, recently described the role Intel is playing in Full Duplex DOCSIS technology and how it paves the way to multigigabit speeds.