Network virtualization, software-defined networking (SDN), and network function virtualization (NFV) have taken the communication industry by storm. They are no longer buzzwords; the industry is seriously focusing and converging on them.
And why shouldn’t it? They promise to bring down OPEX and CAPEX by commoditizing network elements: commodity switches and servers will replace vendors’ proprietary gear as the network elements of the future. Hardware will therefore lose its importance, while software becomes dominant and relevant.
Therefore, if you are in the communication/networking domain, you must align your current and future strategies around SDN/NFV, or you are sure to miss the opportunities these technologies promise.
This article touches upon some untold truths about SDN/NFV. More specifically, it points out that softwarization/virtualization is still in its infancy and therefore associated with some challenges. But these challenges also promise opportunities for some vendors to capitalize on.
What are SDN, NFV and network virtualization?
- SDN: SDN decouples the control plane of networking equipment from its forwarding plane. It centralizes control, which makes the network more flexible, programmable, and centrally managed, and it reduces equipment complexity: the equipment itself becomes a dumb box.
- NFV: NFV virtualizes equipment functions and puts them on a commodity server or switch. For example, a router, a firewall, or a NAT can each become a virtual machine that runs on commodity servers. This completely removes the dependency on vendor-proprietary hardware.
- Network Virtualization: The concept of network virtualization draws on both SDN and NFV. When equipment becomes dumb and its management and control move to a central place, the network can run completely decoupled from the underlying hardware. The network then becomes completely elastic in the resources it can provide; this is why we say the network has been virtualized. Network virtualization is a well-known concept in the data center world.
Both SDN and NFV can be implemented separately, but combining them in one network most likely makes more sense. For example, even though NFV virtualizes network functions, those functions still need to communicate with each other. This is where SDN comes into play. More specifically, functions like a firewall, DPI, and NAT, even when virtualized, have to be chained together so that service traffic flows from one element to the next. This demands a flexible networking protocol, and it is where SDN can bring flexibility and assist NFV.
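To make service chaining concrete, here is a minimal toy sketch in Python of a controller steering one flow through a chain of virtualized functions. It is purely illustrative: `ToyController`, its methods, and the flow-table layout are all invented for this example and do not correspond to any real SDN controller API.

```python
# Hypothetical sketch of SDN-driven service chaining (no real controller API).
FIREWALL, DPI, NAT = "firewall", "dpi", "nat"

class ToyController:
    def __init__(self):
        # (flow_id, current_hop) -> next_hop, mimicking match-action rules
        self.flow_table = {}

    def install_chain(self, flow_id, chain):
        # Install one rule per hop: traffic of this flow leaving `cur`
        # is forwarded to the next function in the chain.
        hops = ["ingress"] + chain + ["egress"]
        for cur, nxt in zip(hops, hops[1:]):
            self.flow_table[(flow_id, cur)] = nxt

    def next_hop(self, flow_id, current):
        return self.flow_table[(flow_id, current)]

controller = ToyController()
controller.install_chain("web-traffic", [FIREWALL, DPI, NAT])
assert controller.next_hop("web-traffic", "ingress") == FIREWALL
assert controller.next_hop("web-traffic", DPI) == NAT
```

The point of the sketch is that the chain is just data in a centrally managed table: rechaining a service (say, dropping DPI) is a rule update, not a hardware change.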
Together, SDN and NFV make the network flexible, agile, and truly programmable. This flexibility and programmability is a much bigger benefit than the CAPEX and OPEX reductions mentioned earlier.
However, this network flexibility comes with some challenges, which are described below. (From here on I will use virtualization as a general term for SDN, NFV, and network virtualization.)
Virtualization puts stress and burden on software as hardware becomes more fault-prone
We are used to physical equipment, including vendor-specific switches and routers. Such equipment is predictable: its vendor-designed ASICs deliver predictable throughput, and it offers predictable high availability.
Commodity switches and servers, on the other hand, are more prone to faults and failures. We therefore have to build reliability and scalability into the software, not the hardware. Building carrier-class reliability in software is something new for the industry; it is an early concept and a work in progress. This does not mean it is impossible to provide the same resilience and reliability in software as in hardware, but, as mentioned, it is still a work in progress. In the race to bring products to market, some vendors have found proprietary means of hardening their software to provide carrier-class features.
Here’s the bottom line: achieving resilience and carrier-class reliability in software is more challenging than in hardware. In other words, virtualization makes fault management and fault isolation difficult.
Virtualization, as mentioned earlier, is a step toward commoditization, which avoids proprietary vendor lock-in. Add to that the flexibility of picking software network elements from different vendors and running them under one management domain, and customers gain even more freedom.
However, running cross-vendor platforms also makes fault management challenging and fault isolation difficult.
Consider a service chain built from different vendors’ elements: if there is a latency or jitter problem, which network element is responsible? Answering that question is genuinely hard, and it demands good fault management.
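One plausible approach to that attribution problem is to timestamp traffic at the boundary of each element and compute per-element latency and jitter. The sketch below is a hypothetical illustration of the idea; the function name, sample format, and numbers are all invented, not taken from any real monitoring product.

```python
import statistics

def per_element_latency(samples, elements):
    """Attribute latency/jitter to each element in a service chain.

    samples: list of dicts mapping a boundary name -> timestamp in ms,
             with "ingress" plus one boundary after each element.
    elements: ordered element names, e.g. ["firewall", "dpi", "nat"].
    """
    boundaries = ["ingress"] + elements
    report = {}
    for i, elem in enumerate(elements):
        # Time spent inside this element = exit timestamp - entry timestamp
        deltas = [s[boundaries[i + 1]] - s[boundaries[i]] for s in samples]
        report[elem] = {
            "mean_ms": statistics.mean(deltas),
            "jitter_ms": statistics.pstdev(deltas),
        }
    return report

# Invented sample data for two packets traversing firewall -> dpi -> nat
samples = [
    {"ingress": 0.0, "firewall": 0.4, "dpi": 2.1, "nat": 2.3},
    {"ingress": 0.0, "firewall": 0.5, "dpi": 3.0, "nat": 3.2},
]
report = per_element_latency(samples, ["firewall", "dpi", "nat"])
worst = max(report, key=lambda e: report[e]["mean_ms"])
```

With this toy data the DPI element dominates the end-to-end latency, which is exactly the kind of cross-vendor attribution a fault management tool would need to automate.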
Virtualization demands high performance from software
Virtual machines are elastic in nature and are supposed to scale out and scale in easily. There is a great need to monitor resource usage and how the virtual machines share compute and storage resources; any resource shortfall leads to performance degradation. It is the job of the resource orchestrator to make sure that every virtual machine gets its fair share of resources. Unlike typical cloud workloads, these virtual machines are stressed by the data-plane throughput they must sustain. All this puts greater demands on the performance of the management system and the underlying virtual compute and networking resources.
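The scale-out/scale-in logic an orchestrator applies can be sketched, in its simplest threshold-based form, as below. This is a hypothetical illustration: the function name, the 0.8/0.3 thresholds, and the use of average CPU load as the sole signal are all invented assumptions, not how any particular orchestrator works.

```python
# Hypothetical threshold-based scaling decision for one VNF's VM pool.
def scaling_decision(cpu_utilizations, scale_out_at=0.8, scale_in_at=0.3):
    """cpu_utilizations: per-VM CPU load in [0.0, 1.0] for the VNF's instances."""
    avg = sum(cpu_utilizations) / len(cpu_utilizations)
    if avg > scale_out_at:
        return "scale_out"   # add a VM before throughput degrades
    if avg < scale_in_at and len(cpu_utilizations) > 1:
        return "scale_in"    # release a VM to free shared resources
    return "steady"

assert scaling_decision([0.9, 0.85]) == "scale_out"
assert scaling_decision([0.1, 0.2]) == "scale_in"
assert scaling_decision([0.5]) == "steady"
```

Real data-plane workloads would need richer signals (packet rate, queue depth, NIC saturation) than CPU load alone, which is part of why the article argues the monitoring burden is so high.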
Today’s performance management tools are not ready to provide end-to-end visibility into the performance KPIs of cross-vendor virtualized platforms. Virtualization will therefore succeed only in the presence of clear end-to-end fault management, performance analytics, and service assurance tools.
Owing to the elastic nature of the virtualized network, and its complete dependence on software to deliver end-to-end service, there is a need for good analytics and performance tools that can show the performance of both the virtualized infrastructure and the virtual network functions.
The importance of, and need for, FCAPS (fault, configuration, accounting, performance, and security) management tools has never been greater in the history of networks.
Network management has suddenly become the most important piece. Dealing with virtual networks, and with cross-vendor virtual elements in particular, is not as easy as dealing with their physical counterparts. There has to be a way to isolate faults, and there has to be a way to give consistent quality of service to different kinds of services.
SDN/NFV may be an opportunity for a new breed of analytics and service assurance vendors
SDN/NFV may be an opportunity for a new breed of analytics vendors that specialize in monitoring virtual resources: vendors that can present end-to-end QoS performance indicators, isolate and attribute faults among different vendors’ elements, show service-impacting faults such as latency and jitter variations across an end-to-end service, and provide service assurance for virtual services.
Looking at the market today, the main NFV vendors do provide FCAPS tools, but they are basic, limited, and tested only on the vendors’ own virtual infrastructure.
There will therefore be a need for highly specialized vendors whose analytics tools work across all sorts of vendor platforms.
For current analytics vendors, this is an opportunity to think beyond the monitoring tools they have today, toward a future in which physical networks will coexist with virtualized ones for some time before being phased out.