One router may kill the other, sooner or later. The “Virtual Router” is demonstrating the performance and power required for carrier-grade networks.
If you are planning to acquire routers, read on: you will find useful information on the options now at hand.
In this blog, I question whether a commodity x86 server has the power and performance to host a virtual router application and, if yes, what the advantages of virtual routers are compared to hardware ones.
But, before that, why virtualize a router in the first place?
Well, the NFV (Network Functions Virtualization) Use Cases document by ETSI lists several use cases targeting different areas in the provider’s space. Provider edge routing is one of them. The throughput needs and service richness at the “edge” make provider edge routing an attractive target for NFV. (NFV is about virtualizing network element functions on commodity servers.)
OK, so to clarify at the start: we are talking about “Provider Edge” routers and NOT “Core” routers.
Core routers are “big beasts”. They need to process hundreds of Gbps at a time and, given those processing needs, they are not a target for virtualization. At least, not yet.
But even then, let’s face it: is it really possible to virtualize a component as powerful as a router on a commodity machine? Will it deliver the throughput and performance needed for carrier-grade routing? Are we compromising on features or performance here?
There is no catch here; read on!
- One virtual router vendor demonstrates 80 Gbps throughput on a single processor.
- Another vendor follows by demonstrating 160 Gbps throughput on two processors in a server as small as 2 RU.
In terms of line port throughput, we are talking about 10 Gbps per line interface. Pretty impressive, isn’t it?
While this throughput is certainly not sufficient for core routing needs, it is a definite “yes” for edge routing.
Before proceeding, we need to understand how a routing application is different compared to a normal IT cloud application.
How is a routing application different from an IT cloud application?
To run an application, a server needs both CPU resources and network I/O (input/output) resources.
A server can easily run IT cloud applications, as they are CPU intensive but usually not network-I/O intensive.
A router, on the other hand, has a control plane that requires a high-performance CPU and a data plane that needs fast network interfaces. Routing is therefore both a CPU- and network-I/O-intensive application, and this challenges server performance: a powerful CPU does not help if the server cannot push packets across its interfaces as fast as a carrier-grade router does.
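To see why forwarding is so demanding, consider the per-packet time budget at line rate. A back-of-the-envelope sketch (assuming worst-case minimum-size 64-byte Ethernet frames, which occupy 84 bytes on the wire once the preamble, start-of-frame delimiter, and inter-frame gap are counted):

```python
# Per-packet time budget at 10 Gbps line rate, worst case:
# minimum-size 64-byte Ethernet frames, which take 84 bytes
# on the wire (frame + 7-byte preamble + 1-byte SFD + 12-byte
# inter-frame gap).
LINE_RATE_BPS = 10e9          # 10 Gbps line rate
WIRE_BYTES = 64 + 7 + 1 + 12  # bytes per packet on the wire

packets_per_second = LINE_RATE_BPS / (WIRE_BYTES * 8)
budget_ns = 1e9 / packets_per_second

print(f"{packets_per_second / 1e6:.2f} Mpps")  # -> 14.88 Mpps
print(f"{budget_ns:.1f} ns per packet")        # -> 67.2 ns per packet
```

Roughly 67 ns per packet is only a few hundred CPU cycles, and a single cache miss to DRAM can cost a large fraction of that. This is why a general-purpose server needs help from the techniques described next.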
So let’s see what chipset vendors are doing to turn servers into very powerful forwarding machines.
How are chipset manufacturers harnessing the power of processors?
Given the fast forwarding performance that routing applications need, processor manufacturers are innovating to help in that area.
So I probed how the chipset vendors achieve this performance.
Just as an example, here are some details on how Intel achieves high packet-processing performance on its processors.
Intel® DPDK libraries and drivers
Intel DPDK provides libraries and drivers that enhance packet-processing performance by as much as 10 times. Some of the features of DPDK include:
- It passes packets from the NIC directly to code running in user space, completely bypassing the kernel network stack and its latency.
- It significantly reduces the time the operating system spends allocating and de-allocating buffers, using pre-allocated memory pools.
- It provides an efficient mechanism to classify packets into flows quickly for processing, greatly improving throughput.
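The last point, flow classification, deserves a concrete illustration. DPDK does this with optimized hash tables over the packet header; the sketch below is not the DPDK API, just a conceptual model of hashing a packet’s 5-tuple so that all packets of one flow land in the same queue:

```python
# Conceptual sketch (NOT the DPDK API): classify packets into
# flows by hashing the 5-tuple, so every packet of a flow maps
# to the same per-flow queue and can be processed in order.
from collections import defaultdict

NUM_QUEUES = 4
queues = defaultdict(list)

def five_tuple(pkt):
    """Flow key from a (hypothetical) parsed-packet dict."""
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

def classify(pkt):
    # hash() stands in for a hardware RSS / Toeplitz hash
    q = hash(five_tuple(pkt)) % NUM_QUEUES
    queues[q].append(pkt)
    return q

pkt_a = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
         "src_port": 1234, "dst_port": 80, "proto": 6}
pkt_b = dict(pkt_a)  # a second packet of the same flow

# Packets of the same flow always land in the same queue.
assert classify(pkt_a) == classify(pkt_b)
```

Because each flow is pinned to one queue, a core can process that queue without locking or packet reordering, which is where the throughput gain comes from.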
So 10 Gbps line throughput is quite realistic: achievable and available today!
How about throughput in future?
Chipset vendors are constantly innovating and improving, and throughput will increase accordingly. Line throughput will certainly not stay at 10 Gbps; it will scale as processing technology advances.
Should you consider a virtual router only for the provider edge?
It is important to consider your routing needs and scale, and the advantages of virtual routing, before making a decision.
For sub-100G throughput needs, a virtual router can deliver everything a physical router can, which makes it an attractive and often better option (see the benefits below). This is the performance achievable today, and it will only increase as processor performance improves.
In any case, the investment you make in the server today is future proof: you can always repurpose it if you need to move to a higher-performance platform later.
However, I do see some reluctance by traditional hardware router vendors that have introduced virtual routers to position them clearly, for the obvious reason of protecting some of their hardware business. They recommend both hardware and virtual routers and leave the choice to the customer.
The fact is that a virtual router can do everything a hardware router can do and much more!
- Advanced IPv4/IPv6 routing, including IP unicast and multicast
- Layer 3 VPNs
- MPLS (LDP, RSVP, P2MP LDP and RSVP)
- Layer 2 VPNs
- Deep packet inspection
- Stateful firewall
Some vendors go further and offer a route reflector as a virtual router.
So one can consider the virtual router not only for edge applications but also for aggregation areas.
What are the advantages of virtual routers compared to hardware routers?
There are many:
Avoiding vendor lock-in and enabling quick service innovation:
There is no need to depend on vendors’ road maps for customized hardware and interfaces. Even the forwarding and control functions can be scaled independently, and the customer buys services simply by upscaling licenses. The customer can even share a server among multiple functions instead of dedicating it to routing alone. This brings significant hardware optimization, slows down hardware expansion, and conserves space and power.
Pay as you Grow Model:
Start small and expand quickly as you grow. There is no need to buy a pool of hardware cards to cope with quick and urgent expansion needs, and no need to wait for long lead times from your vendor. Scale up by adding commonly available x86 hardware resources and simply add licenses as you need them.
Downscale as well as upscale:
While upscaling is an obvious advantage of a virtual environment, downscaling is also a benefit. If the need for one service has gone down and the need for another has gone up, there is no need to dismantle or add hardware; an operator can protect its investment by removing some virtual machines and adding others on the same server, flexibly assigning processor and NIC resources among the different applications.
Cost effective redundancy:
Owing to the many types of edge equipment (routers, firewalls, load balancers), 1:1 equipment redundancy is quite expensive. Redundancy can instead be provided through a shared pool of servers. One can even do better than 1:1 redundancy in such cases, as the equipment becomes a pool of resources providing a higher level of redundancy.
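To put rough numbers on this, compare the hardware counts of the two schemes (a sketch under the simplifying assumption that any pooled server can back up any function):

```python
# Hardware counts for two redundancy schemes (simplified model:
# any server in the shared pool can take over any function).

def dedicated_1_to_1(n_functions):
    # Each function gets its own active + standby appliance.
    return 2 * n_functions

def pooled_n_to_1(n_functions, spares=1):
    # N active servers share a small pool of spare servers.
    return n_functions + spares

# Four edge functions (e.g., router, firewall, load balancer, DPI):
print(dedicated_1_to_1(4))  # -> 8 dedicated appliances
print(pooled_n_to_1(4))     # -> 5 pooled servers
```

The gap widens as the number of protected functions grows, which is why pooled redundancy on commodity servers is attractive at the edge.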
Proof of Concepts:
Run tests of any new feature on test servers before moving it to production. Try-before-you-buy becomes easy for both vendors and customers, and vendors’ benchmarks and SQT (System Quality Testing) become a lot easier than in a pure hardware environment.
In conclusion, a virtual provider edge router is a viable option today, available from multiple vendors. It has obvious CAPEX and OPEX advantages and is a right step toward the cloudification of the network. Service providers should look carefully into this option for their future edge routing needs.
How about telling me your views: do you see a virtual router as a good investment? Leave a comment below.