Can OpenStack Be Viable For Virtual CPE?
Virtual Customer Premise Equipment (vCPE) is a hot topic right now for companies focused on business opportunities in network virtualization. IHS Infonetics surveyed service providers worldwide and concluded that virtual Business CPE (vBCPE) was the top use case for Network Functions Virtualization (NFV) in 2016. A report from Analysys Mason concluded that vBCPE can generate $1.4B in new revenue in North America and Western Europe alone over a five-year period for service providers who are early adopters of the technology. All very compelling for service provider executives focused on improving top-line revenue while also reducing operational costs.
But alarm bells started ringing in Düsseldorf last October. At SDN and OpenFlow World Congress, Peter Willis from BT gave a presentation titled “How NFV is different from Cloud: Using OpenStack for Distributed NFV”, in which he identified six significant limitations of OpenStack that could constrain its use for applications like vCPE. Peter’s comments were widely reported: see, for example, this Light Reading article, which summarized his talk well.
At Wind River, we deliver NFV Infrastructure solutions, based on OpenStack, that enable ultra-high service reliability, optimized VNF performance and comprehensive VM lifecycle management. Our portfolio includes the Titanium Server CPE platform, which, as its name implies, is optimized for small-footprint vCPE deployments at customer premises.
We’ve been working hard to ensure that our solutions address the OpenStack challenges that Peter identified. On June 15th, we’ll join Peter to present a webinar that explores these challenges in detail and explains how Titanium Server solves them.
If you’re interested in the topic of vCPE (and why else would you be reading this post?), then we recommend that you register now both to attend this event and to receive our white paper with full technical details.
Just in case you can’t wait for the webinar, in this post we’ll briefly summarize Peter’s six OpenStack issues. We hope you’ll join us for the webinar, or read the white paper, to learn how they can be addressed.
Challenge #1: Binding Virtual Network Interface Cards to Virtual Network Functions
Some Virtual Network Functions (VNFs) require that their virtual Network Interface Cards (vNICs) be initialized in a specific order and with specific network connections (e.g. vNIC0 to the management network, vNIC1 to the auxiliary network, vNIC2 to the LAN and vNIC3 to the WAN). For deterministic behavior of the vBCPE, the correct VNF interface must always be connected to the correct vNIC, especially after an interface has been disconnected and then reconnected. Testing with an off-the-shelf OpenStack distribution, however, reveals that in some cases the connections are restored incorrectly and in others the VNF locks up.
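To make the ordering requirement concrete, here is a minimal sketch using the openstacksdk Python library (our choice for illustration, not something Peter’s talk prescribes); the cloud name and all UUIDs are hypothetical placeholders. The position of each entry in the networks list is the vNIC order presented to the guest, and it is precisely this mapping that can be lost when an interface is disconnected and reconnected.

```python
# Minimal sketch using openstacksdk; the cloud name, network, image and
# flavor IDs below are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="vcpe-site")  # assumes a clouds.yaml entry

# The VNF expects: vNIC0 = management, vNIC1 = auxiliary, vNIC2 = LAN, vNIC3 = WAN.
ordered_networks = [
    {"uuid": "MGMT-NET-UUID"},
    {"uuid": "AUX-NET-UUID"},
    {"uuid": "LAN-NET-UUID"},
    {"uuid": "WAN-NET-UUID"},
]

# The order of the entries in `networks` determines the vNIC ordering
# presented to the guest at boot time.
server = conn.compute.create_server(
    name="vbcpe-vnf",
    image_id="VNF-IMAGE-UUID",
    flavor_id="VNF-FLAVOR-UUID",
    networks=ordered_networks,
)
conn.compute.wait_for_server(server)
```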
Challenge #2: vCPE service chain modification
This is all about agility and the need to quickly reconfigure a vCPE service chain to add a new service ordered by a customer. For example, a customer who already has a router and a firewall might decide to add WAN acceleration.
OpenStack lacks the primitives required to reconnect the firewall interface from the router to the WAN accelerator. The only options are either (1) to delete the firewall interface and reconnect it, which can lead to ambiguity because firewall rules are tied to a specific virtual NIC, or (2) to provision a new service chain from scratch, which causes an outage of at least five minutes.
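As a hedged illustration of what option (1) looks like in practice, the sketch below uses openstacksdk interface calls; the server name and port UUIDs are hypothetical. Note that nothing in this sequence carries the firewall rules over to the new attachment, which is exactly the ambiguity described above.

```python
# Sketch of the delete-and-reconnect workaround (option 1) using
# openstacksdk; server name and port UUIDs are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="vcpe-site")

firewall = conn.compute.find_server("vcpe-firewall")
old_port_id = "ROUTER-FACING-PORT-UUID"
new_port_id = "WAN-ACCEL-FACING-PORT-UUID"

# 1. Detach the firewall vNIC that currently faces the router.
for iface in conn.compute.server_interfaces(firewall):
    if iface.port_id == old_port_id:
        conn.compute.delete_server_interface(iface, server=firewall)

# 2. Attach a new vNIC on the network facing the WAN accelerator.
conn.compute.create_server_interface(firewall, port_id=new_port_id)

# Firewall rules that were keyed to the deleted vNIC are not carried over.
```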
Challenge #3: Scalability of OpenStack-based controllers to support hundreds of compute nodes
For enterprise-class vCPE deployments, a single OpenStack-based control node (or a pair of redundant nodes) could be required to support hundreds of compute nodes. These could be located remotely at individual customer premises or locally in a service provider’s data center or Central Office (CO). Community OpenStack distributions are not tested to ensure this level of scalability, so service providers have only two choices: either take on the burden of testing (and the inevitable patching) themselves, or deploy an off-the-shelf distribution and accept the inherent risk of unproven scalability. Neither of these alternatives is acceptable.
Challenge #4: Start-up storms (or “stampedes”)
What happens when an optical fiber link is cut and then subsequently restored, so that hundreds or thousands of compute nodes then simultaneously attempt to attach to a centralized controller?
Typically, each compute node will be running several SSH sessions, and the restoration process is both slow and computationally intensive.
Testing has shown that a standard OpenStack controller has insufficient resiliency to cope with this scenario: it can become overloaded and never recover.
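To show the kind of behavior that is missing, here is a generic jittered-backoff sketch (plain Python, not an OpenStack feature): if every compute node retried its connection this way, reconnection attempts would be spread out over time instead of arriving as a stampede.

```python
# Generic illustration (not an OpenStack API): exponential backoff with
# "full jitter" so that thousands of nodes do not retry in lockstep.
import random
import time


def reconnect_with_backoff(connect, base=1.0, max_delay=300.0):
    """Call `connect()` until it succeeds, sleeping a jittered,
    exponentially growing delay between attempts."""
    attempt = 0
    while True:
        try:
            return connect()
        except ConnectionError:
            attempt += 1
            delay = random.uniform(0.0, min(max_delay, base * 2 ** attempt))
            time.sleep(delay)
```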
Challenge #5: Securing OpenStack over the Internet
In a vCPE scenario with centralized control nodes, located either in a data center or a service provider Central Office (CO), the control nodes communicate with the compute nodes over the Internet.
In a lab test that BT has referenced, over 500 pinholes had to be opened in the firewall to allow this connection scenario to work, including ports for VNC and SSH for CLIs. The firewall had to be reconfigured every time the dynamic IP address of a compute node changed, which happened several times during their testing.
OpenStack creates significant security challenges for this model of centralized control and distributed compute. The design of OpenStack presents too many attack vectors for it to be practically secured over the Internet.
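For a sense of scale, the snippet below lists a small, illustrative subset of the default ports that standard OpenStack services listen on; the exact pinhole count in BT’s lab depended on their specific configuration, and this list is indicative only.

```python
# Illustrative subset of default OpenStack service ports (indicative only;
# the actual set depends on which services are deployed and how they are
# configured).
DEFAULT_PORTS = {
    "SSH": "22",
    "AMQP message bus (RabbitMQ)": "5672",
    "Keystone (identity)": "5000",
    "Nova API": "8774",
    "Glance API": "9292",
    "Neutron server": "9696",
    "Instance VNC servers": "5900 and up, one per instance",
}

for service, port in DEFAULT_PORTS.items():
    print(f"{service:>30}: {port}")
```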
Challenge #6: Backwards compatibility between releases
In a vCPE application, the control nodes and the compute nodes are required to run the same version of OpenStack, because releases are not guaranteed to be backwards compatible with each other. With new versions of OpenStack being released approximately every six months, it’s important that control nodes and compute nodes can be updated simultaneously without incurring downtime or service outages.
We’ll be talking about how to solve these problems during our webinar on June 15th. Please register now; we look forward to your participation as we explore this topic, which is so important for the success of vCPE deployments.