Ryan, Simon,
Sorry, I noticed this thread a few weeks back and didn't get to respond at the 
time.

It seems there are two basic questions being asked here.

Firstly, should you use an SBC at all?  In the interests of full disclosure, 
Metaswitch sell SBCs, so unsurprisingly, I'd say the answer is a resounding 
yes, and generally for the reasons that Ryan has presented.  

Any decent SBC is going to secure your network in a number of important ways.  
It also provides a key demarcation point separating access from core, or core 
from interconnect, and probably doing some crucial interworking of SIP 
dialects, codecs or encryption along the way.  

Increasingly, SBCs also provide some form of reporting on what they are seeing. 
 Since they see both media and signalling flowing through your network, they 
are uniquely placed to give you visibility into what's going on, although the 
degree of functionality varies somewhat between different vendors' offerings.

The second question is, if you're going to deploy an SBC, does it have to be on 
proprietary hardware?  Again, full disclosure here - Metaswitch's Perimeta SBC 
is available as a VM, on COTS servers or on our own ATCA hardware.  
Unsurprisingly, therefore, our view would be that it absolutely does not have 
to be on proprietary hardware, and in fact limiting yourself to that option has 
significant drawbacks in terms of high costs and lost flexibility.

Around 2010, Intel made a strategic decision that they wanted to become a more 
significant player in the network space, and they started engineering their 
CPUs to move packets around faster.  An Intel server went from being able to 
forward less than 5 million packets per second to 160 million in the space of a 
few years, and this continues to grow. This is important because SBCs used to 
require dedicated network processors (often FPGA-based) in order to handle DoS 
attacks and forward traffic at line rate without introducing jitter and latency.  
Intel's investment means this is now a thing of the past, and that level of CPU 
performance is available whether you're deploying on bare metal or with a 
hypervisor.
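To put those figures in context, the standard back-of-envelope sum for 
line-rate forwarding uses minimum-size (64-byte) Ethernet frames, which occupy 
84 bytes on the wire once you add the preamble and inter-frame gap. A quick 
sketch (my own illustration, not from any vendor datasheet):

```python
# Back-of-envelope line-rate packet rates for minimum-size Ethernet frames.
# 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 wire bytes.
WIRE_BYTES = 64 + 8 + 12

def line_rate_pps(link_gbps: float) -> int:
    """Maximum packets per second at line rate for min-size frames."""
    return int(link_gbps * 1e9 / (WIRE_BYTES * 8))

for gbps in (10, 40, 100):
    print(f"{gbps} GbE: {line_rate_pps(gbps) / 1e6:.2f} Mpps")
```

10 GbE works out to roughly 14.88 Mpps and 100 GbE to roughly 148.8 Mpps, 
which is the ballpark that 160-million figure is playing in.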

If you do choose to deploy a virtual solution, there are other challenges.  As 
has been noted, whether it's VMware's ESXi or OpenStack's KVM, the hypervisor 
and in particular the virtual switch present in a virtual system acts as a 
bottleneck for packet processing.  At best, virtualization simply caps your 
performance at whatever can pass through the virtual switch.  At worst, 
contention means latency and jitter become unacceptably high.  Most 
real-time communications products available as virtual offerings today 
therefore mandate that they are deployed without contention, so as to avoid the 
worst case impacts.  However, there is significant ongoing investment here in 
order to try to realize the goal of having high-performance packet processing 
in a virtual environment.  Today, it is possible to achieve in the region of 
85% of the bare-metal throughput by using techniques like SR-IOV and DPDK, 
which are supported by most modern hardware.  These reduce the number of packet 
copies which are required, and provide direct connectivity between specific I/O 
queues on the hardware NICs and certain VMs, thereby bypassing the bottleneck 
of the virtual switch.  There is a trade-off, however, since you lose some of 
the benefits of virtualization like vMotion.  
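If you want to quantify "unacceptably high" for yourself, the figure most RTP 
stacks report is the RFC 3550 inter-arrival jitter estimate, which you can 
compute from per-packet transit times. A minimal sketch (my own illustration, 
not any particular product's code):

```python
# RFC 3550 inter-arrival jitter estimator: an exponentially smoothed
# average of the change in transit time between consecutive packets.

def rtp_jitter(transit_times):
    """Running jitter estimate (same units as the input) per RFC 3550."""
    jitter = 0.0
    prev = None
    for t in transit_times:
        if prev is not None:
            d = abs(t - prev)              # change in transit time
            jitter += (d - jitter) / 16.0  # 1/16 smoothing per the RFC
        prev = t
    return jitter

# Steady transit times mean zero jitter; contention shows up as variance.
print(rtp_jitter([0.020] * 50))                  # 0.0
print(rtp_jitter([0.020, 0.024, 0.019, 0.025]))
```

Comparing this measured under load against the same VM uncontended is a simple 
way to see what the virtual switch is costing you.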

All is not lost, and there are already commercially available virtual switches 
which provide much higher throughput than the VMware native offering without 
losing functionality like vMotion and the flexibility they bring to your 
network.

In short - if you see the need for an SBC at all, you would be wise to ensure 
you at least consider solutions which are not tied to proprietary hardware.  
Software solutions are live today in the networks of Tier 3s right up to global 
Tier 1 operators, both on bare metal and virtualized.  They can scale as low as 
a few tens of sessions and as high as tens of thousands on a single server, and 
of course a well-designed cloud offering can elastically scale from one end of 
the spectrum to the other if required.

Hope this is still a relevant perspective.

Regards,
Nick


-----Original Message-----
From: uknof [mailto:[email protected]] On Behalf Of Simon 
Woodhead
Sent: 07 April 2016 10:41
To: Ryan Finnesey
Cc: [email protected]
Subject: Re: [uknof] Virtualized SBC

Hi Ryan

No is the direct answer but I can lend some wider perspective which might be 
helpful.

Can you run media at scale in VMs? 

Assuming we’re talking hypervisor virtualisation and a proper enterprise grade 
hypervisor like vSphere, rather than public cloud, then yes you can with 
caveats. There used to be concerns around CPU scheduling and the introduction 
of jitter but recent versions have dramatically improved that. There is jitter 
just by virtue of using a shared resource but how much depends on contention 
levels; on a platform that is not oversubscribed it is perfectly acceptable and 
not dissimilar to natural jitter levels over typical end-user connectivity. 
VMware did some testing and research on this at various levels of subscription 
a few years ago which you might find useful: 
http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf

Most of our media is handled in hardware but where it is handled in software 
that is still on bare-metal with a few exceptions. Those exceptions are, for 
example, fax servers and even our office PBX. Performance from both is 
indistinguishable from bare-metal for practical purposes, albeit at much lower 
volume.

Would I buy an SBC?

No, unless I had specific transport challenges (where I didn’t understand one of 
them!) or to give certified compatibility with something non-standard such as 
Lync. For SIP-to-SIP I firmly believe a properly configured open-source stack 
to be superior in _almost_ every way. The only two selling points for a single 
box SBC over an open source stack that I see are:

- ticking the procurement box of a clean point of demarcation deployable right 
on the edge. This is trivial to do on open-source on commodity hardware but you 
can’t give a procurement bod a spec sheet, and to do it properly it won’t be a 
single box solution.

- hardware DSPs. You can obviously get hardware DSPs on a PCI card (we use them 
quite successfully where we can’t bypass media directly to hardware) but it is 
greatly preferable to handle it truly in hardware, which an SBC does, and does 
well.

Do either of those selling points apply to a vSBC?

No! By definition it will not be sitting on the edge, it’ll be sitting quite 
deeply inside the network and rather than being a clean point of demarcation, 
it’ll be dependent on hypervisor security and any trunking (or God forbid 
overlay network) used to reach the virtual NIC. 

Similarly, there is no hardware DSP as there is no hardware. All transcoding 
(and for that matter encryption) will be done in software, in shared resource 
with additional layers underneath.

Now, having said all of that I have in moments of weakness looked at these and 
quizzed vendor SEs. I believe if you feel you need an SBC they can do the job, 
just with the two compromises I mention above. Where I do struggle is on the 
performance claims particularly in terms of concurrent calls. The performance 
stats I’ve seen for similar things (e.g. 50k concurrent!) are trivial to 
achieve in SIP but in terms of media handling feel far closer to basic media 
proxy than a full B2BUA performing encryption and transcoding (all in 
software). Thus before committing I’d be wanting to test with real-world 
traffic and would fully expect throughput to be a fraction of the datasheet 
figure. If that turns out anything like the other virtual versions of hardware 
appliances we’ve looked at in other fields, I expect you’ll conclude the 
hardware version is preferable!

Hope that helps!
Simon


> On 7 Apr 2016, at 02:12, Ryan Finnesey <[email protected]> wrote:
> 
> Has anyone worked with products similar to 
> http://www.sonus.net/products/session-border-controllers/virtualized-sbc-swe
> 
> What has your experience been?
> 
> Cheers
> Ryan
>       
> 

