What does “open” really mean in the context of NFV?
2015 was certainly an interesting year for Network Functions Virtualization (NFV). It was the year when it became clear that the early use cases would likely be virtual CPE applications, which offer a compelling business case for Return on Investment (ROI) and can be implemented with minimal technology risk. It was the year when the ETSI initiative moved beyond the initial architectural phase into detailed work on specific technical topics. And it was the year when everyone in the NFV community started trumpeting the message of “openness”, despite an apparent lack of consensus about what “open” actually means in this context.
As a supplier of a Carrier Grade NFV infrastructure platform based on open-source software, it’s important to us at Wind River that we understand exactly what the industry means by the term “open solutions”. We also need to know what service providers expect in terms of “openness”, since they’re the actual consumers of NFV solutions. So we were delighted to have the chance to sponsor a recent survey conducted by Telecom TV, which asked the question “What does ‘Open’ really mean?” for NFV.
The survey remains open, so there’s still time for you to submit your inputs and comments. If you’re even remotely involved in NFV, we’d encourage you to do so because the whole objective is to obtain feedback from as wide a community as possible.
In this post, we’ll summarize the key results to date and discuss some of their implications for NFV vendors. If the overall trends change significantly as more people respond, we’ll publish an updated summary later, reflecting the final set of answers and comments.
First, it’s always good to know who we’re listening to, and in this case the respondents were roughly 33% Telecom Equipment Manufacturers (TEMs), 33% software vendors, 10% service providers and the rest “other” (probably media and analysts). That’s a reasonable mix of company types, though we look forward to getting a few more service provider responses.
Out of the 13 questions in the survey, possibly the most important is “What defines an ‘Open Solution’ for NFV?”. Interestingly, out of the five options available, no single choice stands out as a runaway winner just yet. Within a few percentage points, all of the following definitions are considered roughly equally important:
- “Compatibility with ETSI industry-standard APIs”
- “Compatibility with de-facto standards such as OpenStack and DPDK”
- “Fully open source and licensed from an existing open-source project with vendor support”
- “‘Download and use’ open source without vendor involvement”
- “All APIs in the platform are openly published”.
The survey next asks what other considerations are most important when developing a PoC and when deploying NFV in a commercial network. The results are very similar in each case: by a significant margin, the #1 consideration is “Interoperability with solutions from other vendors”, with “Carrier Grade availability and reliability” a close second. This was no surprise to us, and it’s why we launched the Titanium Cloud ecosystem in 2014 to accelerate NFV deployments and minimize our customers’ schedule risk. Through Titanium Cloud, we work closely with our partners at an engineering level to make sure that their products work correctly with Titanium Server, while leveraging the features of Titanium Server to ensure Carrier Grade reliability and maximum performance.
While less important than interoperability and Carrier Grade reliability, the other considerations of interest to the survey respondents were:
- “Adherence to industry and de-facto standards”
- “Open source code”
- “Contributions to open source communities”
- “Technical support from suppliers/component vendors”.
The apparent importance of “de-facto standards” is interesting. With all the industry activity around the open-source Open Platform for NFV (OPNFV) project, it’s worth considering what will happen once this work leads to a stable code base that multiple vendors leverage to create NFV solutions. We would expect OPNFV to become, at that point, a de-facto standard against which all NFV vendors will have to test their solutions. Companies providing VNFs, for example, will need to verify the correct operation of those VNFs when running on the basic OPNFV code, just as they validate them today on the NFVI platforms, such as Titanium Server, that are actually deployed by their customers.
The survey clearly shows that the industry has realistic and pragmatic expectations about what pure open-source code can deliver. In answer to the question “Do you expect to get all the reliability and availability you need for a commercially deployed NFV solution from pure open source?”, a resounding two-thirds of respondents have so far answered “No”.
It’s worth noting in this context that neither of the first two OPNFV releases, “Arno” and “Brahmaputra”, incorporates any features that contribute to delivering Carrier Grade reliability in the NFVI platform. This is an example of an area where a company like Wind River, with extensive experience in delivering six-nines (99.9999%) infrastructure, adds critical value.
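To put that “six-nines” figure in perspective, here’s a quick back-of-the-envelope sketch (our own illustration, assuming a simple 365-day year, not data from the survey) of how little downtime each availability level actually allows:

```python
# Illustrative only: translate availability percentages into allowed downtime.
# Assumes a 365-day year and steady-state availability; not survey data.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for label, availability in [
    ("five nines (99.999%)", 0.99999),
    ("six nines (99.9999%)", 0.999999),
]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: ~{downtime_min:.2f} min/year (~{downtime_min * 60:.0f} seconds)")
```

In other words, a six-nines NFVI platform can be unavailable for only about half a minute per year in total, which helps explain why two-thirds of respondents don’t expect to get there from pure open source alone.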
Solutions such as Titanium Server build on community-driven reference code and enhance it with functionality that is an absolute requirement for platforms deployed in live service provider networks, while remaining fully compatible with all the applicable open standards.
The last question that we’ll highlight in this post refers to the aspects of NFV that are most impacted by open source solutions. Specifically, it asks “How important is an open solution when considering…” six different criteria. In this case, the clear frontrunners are “Avoidance of vendor lock-in” and “Interoperability”, reinforcing the importance of compatibility with both industry standards and de-facto standards that we mentioned earlier.
Also rated as significant in the answers to this question were “Time to market”, “Uptime / availability”, “VNF performance” and “Differentiation”.
The results of this survey already provide an interesting snapshot of the industry’s views on “openness” in NFV.
From the perspective of the service providers who are actually deploying NFV-based services in their networks, the benefits of compatible, interoperable solutions are significant. Seeing proof that the various elements of their end-to-end solution have been pre-validated to work together correctly accelerates their overall deployment time, while also reducing their schedule risk and enabling their program managers to get a little more sleep at night.
At the same time, compliance with open standards compels vendors to develop solutions that are interoperable and replaceable. This is key to avoiding the vendor lock-in that existed in the “bad old days” of physical network infrastructure, which after all was one of the main motivations that drove service providers worldwide to collaborate on NFV in the first place.
Disruptive technology changes like NFV represent a rare opportunity for aggressive service providers to grab market share from their competitors, thereby growing revenues and increasing their profitability. Solutions that are fully compatible with all the relevant open standards allow them to achieve that boost to their business while leveraging open-source solutions and avoiding any risk of vendor lock-in.
Finally, basing your NFV deployment on open-source software doesn’t have to mean that you end up with the same performance and reliability as your competitors. Choosing a platform that extends the open-source baseline with compatible, value-added enhancements allows you to differentiate yourself from the pack and grab market share during this exciting industry disruption.