broad business and digital transformation made possible particularly at the network edge.
Intel recently announced a broad portfolio of new solutions to further transform communication infrastructure into data-centric, cloud networks. Today, I am proud to share further details of our new 2nd generation Intel® Xeon® Scalable processors and complementary platform components specifically designed to accelerate the deployment of network functions virtualization (NFV) and cloud workloads anywhere in the network.
What makes these solutions game changing? To realize the full potential of 5G, platforms deployed across the network must meet the following requirements:
- Ensure high performance for key network workloads
- Leverage a programmable, scalable and efficient data plane architecture
- Support end-to-end quality of service (QoS) for network slicing
- Enable artificial intelligence (AI) services for machine learning and inferencing
- Support the creation, deployment and management of edge services
- Be secure
- Embrace new Ethernet solutions to support high-speed movement of data
Let’s take a deeper technical look at how this game-changing processor architecture achieves
these requirements and more.
High Performance for Key Network Workloads
The transition from fixed-function appliances to server-based virtualization consolidates diverse workloads on a single platform and architecture. Virtualized networks demand flexible allocation of resources to manage unbalanced workloads that result from this consolidation.
The 2nd generation Intel Xeon Scalable Gold 6200 series includes processor SKUs specially designed for network applications. These ‘N’ SKUs deliver up to a 50 percent performance improvement on various NFV workloads compared to the previous-generation Intel Xeon Scalable Gold 6100 series. These significant gains come from more CPU cores, higher frequencies for NFV applications and a new technology called Intel® Speed Select Technology – Base Frequency (Intel SST-BF) that raises the base frequency on certain high-priority cores while lowering it on the CPU’s remaining cores. This power efficiency technology directs power where and when it is most needed by running lower-priority workloads at a lower frequency and assigning higher-priority workloads to the higher-frequency cores. This unlocks incremental performance increases of up to 1.76x (1) for network workloads.
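As a rough illustration of how an orchestration layer might exploit Intel SST-BF, the sketch below partitions cores by base frequency and reserves the high-frequency ones for priority data-plane threads. On Linux, per-core base frequency is exposed at /sys/devices/system/cpu/cpu*/cpufreq/base_frequency; the frequency values and core counts below are invented for illustration.

```python
def partition_cores(base_freq_khz, n_priority):
    """Split core IDs into high- and normal-priority sets by base frequency.

    base_freq_khz: dict mapping core id -> base frequency in kHz, as would
    be read from sysfs on an Intel SST-BF enabled system.
    """
    ranked = sorted(base_freq_khz, key=base_freq_khz.get, reverse=True)
    return ranked[:n_priority], ranked[n_priority:]

# Illustrative frequencies: with SST-BF enabled, a subset of cores runs at
# a higher guaranteed base frequency while the rest run lower.
freqs = {0: 2_700_000, 1: 2_100_000, 2: 2_700_000, 3: 2_100_000}
high, normal = partition_cores(freqs, 2)
print(high)  # the high-frequency cores host latency-sensitive data-plane threads
```

A real deployment would then pin threads to these sets (for example with `os.sched_setaffinity`), keeping best-effort workloads off the boosted cores.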
Intel® Optane™ DC Persistent Memory is a new class of server memory that delivers high capacity more economically than DRAM at near-DRAM performance. For example, DRAM memory accounts for almost 60 percent of the cost of a 1.5 TB server. Intel Optane DC Persistent Memory can reduce total memory cost by 44 percent (2).
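A back-of-the-envelope version of that economics argument, using invented per-GB prices (not Intel's figures), looks like this:

```python
# Hypothetical per-GB prices chosen for illustration only.
DRAM_PER_GB = 10.0
PMEM_PER_GB = 4.0   # assumed cheaper persistent-memory price point

all_dram = 1536 * DRAM_PER_GB                      # 1.5 TB of DRAM
mixed    = 384 * DRAM_PER_GB + 1152 * PMEM_PER_GB  # 384 GB DRAM + 1152 GB PMem

reduction = 1 - mixed / all_dram
print(f"memory cost reduction: {reduction:.0%}")   # ~45% with these prices
```

The exact percentage depends entirely on the assumed price gap and DRAM-to-PMem ratio; the cited 44 percent figure comes from Intel's own configuration disclosure (2).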
When data is persistent (non-volatile) and available, systems with large datasets, tables or databases can reboot almost instantly. By comparison, applications using DRAM must reload memory from external storage over the PCIe bus, which can take considerably longer. Applications that rely on in-memory databases, such as data analytics, subscriber databases, forwarding tables and billing systems, can see significant performance benefits with Intel Optane DC Persistent Memory. Intel demonstrated Apache Spark SQL analytics running 8x faster (2).
One great example of a network application that benefits from persistent memory is the Content Delivery Network (CDN), which caches content in multiple geographic locations to maximize proximity to users and reduce latency. With the explosive growth of live, linear content, CDN operators look to memory instead of storage, since live linear content is only buffered, not stored. When optimized with Intel Optane DC Persistent Memory, CDN solutions can be more scalable, cost effective and performance optimized.
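To make the instant-restart argument concrete, here is a minimal sketch of the memory-mapped access pattern persistent memory enables. It uses an ordinary file via Python's mmap; on a real deployment the file would sit on a DAX-mounted persistent-memory filesystem (typically accessed through PMDK in C), so the reopened mapping is immediately usable with no bulk reload from SSD or HDD over PCIe. The file name is invented.

```python
import mmap
import os
import struct

PATH = "table.bin"  # on a real system: a file on a DAX-mounted pmem
                    # filesystem, e.g. /mnt/pmem/table.bin

def open_table(path, size=4096):
    # Create/open a file-backed mapping. With Intel Optane DC Persistent
    # Memory in App Direct mode, the same pattern gives direct load/store
    # access to durable media.
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    os.ftruncate(fd, size)
    return mmap.mmap(fd, size)

m = open_table(PATH)
struct.pack_into("<Q", m, 0, 42)  # write a record in place
m.flush()                          # persist (msync here; CLWB/fence on pmem)
m.close()

m2 = open_table(PATH)              # "reboot": reopen and read immediately
value, = struct.unpack_from("<Q", m2, 0)
print(value)  # 42 -- no reload from external storage needed
```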
Programmable, scalable and efficient data plane architecture
In my recent blog, I outlined the reasons why the industry needs to continue to invest in efficient data plane architecture advancements that support the high-volume scaling of VNFs.
The new 2nd generation Intel Xeon Scalable processors provide a significant increase in network packet throughput, and lower latencies than kernel OVS, for NFV use cases, delivering over 600 Gbps of packet processing throughput in a dual-socket platform. The additional cores deliver a 1.58x performance improvement in OVS-DPDK (3) packet switching over first-generation Intel Xeon Scalable processors.
No longer I/O-bound, the new platform scales with cores thanks to ample I/O and memory bandwidth. As we move to cloud native VNFs, packet processing performance will have to scale as well. Today the Data Plane Development Kit (DPDK) meets the demanding performance requirements of NFV appliances through direct access to the network interface and memory subsystem. Cloud native environments, however, will require a higher level of abstraction to extract the benefits of cloud native applications.
AF_XDP (Address Family eXpress Data Path) and the Application Device Queues (ADQ) technology that accelerates it on the Intel® Ethernet 800 Series are two crucial technologies for delivering the packet scalability required for the migration to cloud native environments. As a result, customers can use DPDK for ultimate performance and the benefits of user-space optimizations, and/or the new AF_XDP interface for flexibility and scalability.
End-to-end, Network Quality of Service (QoS)
We believe quality of service (QoS) is a fundamental network infrastructure requirement to support real-time traffic profiles and critical 5G capabilities, such as network slicing. Our vision is to implement QoS in every subsystem of the 2nd generation Intel Xeon Scalable platform.
The new Intel Xeon Scalable processors extend Intel® Resource Director Technology (RDT) from cache QoS to memory QoS. Specifically, they implement Memory Bandwidth Monitoring (MBM) and Memory Bandwidth Allocation (MBA) in addition to Cache Allocation Technology (CAT) for fine-grained QoS support on the platform. Combined with the capabilities of Intel Ethernet controllers, this will provide the end-to-end capabilities to deliver QoS for real-time data streams, including video applications.
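On Linux, RDT's CAT and MBA controls are exposed through the resctrl filesystem, where each control group gets a schemata file. The helper below builds such a schemata string; the mask and percentage values are illustrative, and the valid bitmask width depends on the CPU's L3 way count.

```python
def resctrl_schemata(l3_mask, mb_percent, cache_ids=(0,)):
    """Build a Linux resctrl 'schemata' entry combining Cache Allocation
    Technology (an L3-ways bitmask) and Memory Bandwidth Allocation (a
    percentage cap). On a real system this string is written to
    /sys/fs/resctrl/<group>/schemata.
    """
    l3 = "L3:" + ";".join(f"{c}={l3_mask:x}" for c in cache_ids)
    mb = "MB:" + ";".join(f"{c}={mb_percent}" for c in cache_ids)
    return l3 + "\n" + mb + "\n"

# Confine a best-effort group to 4 L3 ways (mask 0xf) and 50 percent of
# memory bandwidth, leaving the rest for the real-time data plane.
print(resctrl_schemata(0xF, 50))  # emits "L3:0=f" and "MB:0=50"
```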
AI-based Services for Machine Learning and Inferencing
Artificial intelligence (AI) has wide applications in networking and 5G, such as closed-loop network automation, fault prediction, malware detection and video/media analytics at the edge. These network applications incorporate a variety of deep learning workloads, such as convolutional neural networks (CNN), recurrent neural networks (RNN) and reinforcement learning, and classic machine learning (ML) models like random forests and support vector machines (SVM). Performing model inference efficiently on CPUs is of critical interest to many users. For some use cases, the ability to perform in-memory compute directly on 3 TB of data per CPU socket using Intel Optane DC persistent memory avoids the substantial delay of reading and writing volumes of data to/from external storage (SSD, HDD) over the PCIe bus, thus reducing analytical response times.
We have added new instructions to the new 2nd generation Intel Xeon Scalable processors, called Intel® Deep Learning Boost, to significantly improve the performance of these workloads. In particular, the Vector Neural Network Instructions (VNNI) support INT8 operations with a 4x theoretical gain over FP32. That is, for models proven to have about the same inference accuracy with INT8 as with FP32, a speedup of up to 4x can be achieved. Learn more about this new technology.
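For intuition, the snippet below models the arithmetic of one 32-bit lane of VPDPBUSD, the core VNNI instruction: four unsigned-by-signed 8-bit multiplies accumulated into a signed 32-bit lane in a single instruction, replacing the earlier three-instruction sequence (VPMADDUBSW, VPMADDWD, VPADDD). This is a pure-Python model of the semantics, not how the instruction is invoked.

```python
def vpdpbusd_lane(acc, u8x4, s8x4):
    """Model one 32-bit lane of AVX-512 VNNI's VPDPBUSD: multiply four
    unsigned 8-bit values by four signed 8-bit values and accumulate the
    sum into a signed 32-bit accumulator."""
    assert all(0 <= u <= 255 for u in u8x4)
    assert all(-128 <= s <= 127 for s in s8x4)
    total = acc + sum(u * s for u, s in zip(u8x4, s8x4))
    # Wrap to signed 32 bits, as the hardware accumulator does.
    total &= 0xFFFFFFFF
    return total - 0x100000000 if total >= 0x80000000 else total

print(vpdpbusd_lane(10, [1, 2, 3, 4], [5, 6, 7, -8]))  # 10 + 5+12+21-32 = 16
```

An INT8 dot product therefore touches 4x more elements per instruction than FP32 FMA lanes, which is where the theoretical 4x gain comes from.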
Platform for creation, deployment and management of edge services
Edge computing is the placement of data center-grade network, compute and storage closer to endpoint devices to improve service capabilities and delivery, optimize total cost of ownership (TCO), comply with data locality requirements, and reduce application latency. Edge networks require pervasive virtualization and “cloudification” to support 5G use cases, such as industrial IoT, smart retail, immersive visual cloud applications and much more.
At MWC 2019, Intel announced the Open Network Edge Services Software (OpenNESS) open source toolkit to foster open collaboration and application innovation at the network and enterprise edge. OpenNESS will make it easier for cloud and IoT developers to engage with a worldwide ecosystem of hardware, software and solutions integrators to develop new 5G and edge use cases and services. Intel Speed Select Technology, Intel Resource Director Technology and Intel Optane DC persistent memory all help make the services delivered through OpenNESS faster and more efficient.
Hardware-Enhanced Security
The new 2nd generation Intel Xeon Scalable processors feature built-in data protection through hardware-enhanced security to thwart malicious exploits and maintain workload integrity with reduced performance overhead.
Intel® Boot Guard provides the hardware Root of Trust (RoT) for platform boot to cryptographically verify Intel CPU firmware prior to boot. Together, Intel Boot Guard and UEFI standard defined Secure Boot are implemented on Intel platforms to create a chain of trust for boot and to assure the platform owner that their authorized firmware is deployed on those platforms. Intel® Select Solution for Network Function Virtualization Infrastructure (NFVI) with Intel Boot Guard assists in ensuring hardware-rooted trust on the deployed infrastructure.
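The chain-of-trust idea can be sketched as follows. This toy model uses plain SHA-256 digests; real Intel Boot Guard verifies signed firmware manifests against keys fused into the hardware, so treat this purely as an illustration of staged verification, with invented stage names.

```python
import hashlib

def verify_chain(stages, expected_digests):
    """Toy model of a boot chain of trust: each stage's code is hashed and
    checked against a digest anchored by the previous stage (the first
    anchor stands in for the hardware root of trust)."""
    for blob, expected in zip(stages, expected_digests):
        if hashlib.sha256(blob).hexdigest() != expected:
            return False  # halt boot: this stage was modified
    return True

fw = [b"initial boot block", b"uefi firmware", b"os loader"]
anchors = [hashlib.sha256(b).hexdigest() for b in fw]
print(verify_chain(fw, anchors))                       # genuine firmware boots
print(verify_chain([b"tampered"] + fw[1:], anchors))   # tampering halts boot
```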
Hardware-Enabled Security and Resilience features include:
- Encryption (CPU & memory module)
- Intel® Security Libraries for Data Center (SecL)
- Platform attestation & data sovereignty from boot through VM via Intel® Trusted Execution Technology (TXT)
- Intel® Threat Detection Technology
- Intel® Select Solution for Hardened Security with Lockheed Martin
- Dual-Port Intel® Optane™ SSD DC 4800X (high availability of stored data for mission-critical apps)
New Ethernet solutions to support high-speed movement of data
To move data faster, we are announcing the Intel Ethernet 800 Series, which supports Ethernet speeds of up to 100 Gbps. It includes a new technology called Application Device Queues (ADQ). ADQ works like dedicated express lanes on a highway, getting data to its destination faster with less interference. Running AF_XDP on the Intel Ethernet 800 Series with ADQ on a 2nd generation Intel Xeon Scalable processor-based platform in our Intel lab, the team observed initial packet processing results that exhibit near-linear scaling as cores are added, making the aggregate performance across many cores extremely valuable. Accelerating AF_XDP with ADQ also roughly halves the number of CPU cores utilized, freeing processor cores for other tasks.
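Near-linear scaling is easy to quantify: divide aggregate throughput by core count and compare against the single-core baseline. The numbers below are invented for illustration, not Intel's measurements.

```python
def scaling_report(samples):
    """samples: list of (core_count, aggregate_mpps) pairs, smallest first.
    Returns (cores, speedup, efficiency) tuples; efficiency near 1.0
    means near-linear scaling."""
    c0, t0 = samples[0]
    per_core0 = t0 / c0  # single-core baseline throughput
    return [(c, t / t0, (t / c) / per_core0) for c, t in samples]

# Hypothetical aggregate packet rates (Mpps) at 1, 2 and 4 cores.
for cores, speedup, eff in scaling_report([(1, 10.0), (2, 19.6), (4, 38.4)]):
    print(f"{cores} cores: {speedup:.2f}x speedup, {eff:.0%} efficiency")
```

Efficiency holding in the high-90-percent range as cores are added is what "near-linear" means in practice.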
Advancing 5G, Edge and Cloud Native with Data-Centric Innovation
The new 2nd generation Intel Xeon Scalable processors are truly game changing. They deliver the performance, scalability and Quality of Service required for 5G. I look forward to engaging with the developer community to bring 5G to life through the course of 2019 and continue our network transformation journey. Visit intel.com/network for more information, and join me for more conversations on Twitter @rgadiyar.
(1) Performance results are based on testing by Intel as of 2/04/2019 and may not reflect all publicly available security updates. See configuration disclosure for details. No product or component can be absolutely secure. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.
(2) Performance results are based on testing: 8X (8/2/2018), 9X Reads/11X Users (5/24/2018), Minutes to Seconds (5/30/2018) and may not reflect all publicly available security updates. No product can be absolutely secure. See configuration disclosure for details. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/benchmarks.
(3) Definitions of select virtual network functions (VNFs) referenced:
- vCCAP: Virtual Converged Cable Access Platform is a component that exchanges digital signals with cable modems on a cable network.
- Open vSwitch (OVS) is a software production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. DPDK libraries accelerate packet processing in Open vSwitch for both physical and virtual interfaces. http://docs.openvswitch.org/en/latest/
- vBNG/vBRAS (Virtual Broadband Network Gateway/Broadband Remote Access Server): The vBRAS is a VNF (Virtual Network Function) approximation serving as a Broadband Remote Access Server.
- vFW: The Virtual Firewall (vFW) is a VNF approximation serving as a stateful L3/L4 packet filter with connection tracking enabled for TCP, UDP and ICMP.
- vCG-NAT: The virtual Carrier-Grade Network Address Translation is a VNF approximation that extends the life of a service provider's IPv4 network infrastructure and mitigates IPv4 address exhaustion through large-scale address and port translation. It processes traffic in both directions and also provides connectivity between IPv6 access networks and IPv4 data networks using IPv6-to-IPv4 address translation, and vice versa.
- DPDK IPsec-GW forwarding is an approximation of an IPsec security gateway application implemented in software with an optimized AES-NI instruction implementation.