It seems there are two basic questions being asked here.


Firstly, should you use an SBC at all?  In the interests of full disclosure, 
Metaswitch sell SBCs, so unsurprisingly, I'd say the answer is a resounding 
yes, and generally for the reasons that Richard has presented.



Any decent SBC is going to secure your network in a number of important ways.  
It also provides a key demarcation point separating access from core, or core 
from interconnect, and typically performs some crucial interworking of SIP 
dialects, codecs or encryption along the way.



Increasingly, SBCs also provide some form of reporting on what they are seeing. 
 Since they see both media and signalling flowing through your network, they 
are uniquely placed to give you visibility into what's going on, although the 
degree of functionality varies somewhat between different vendors' offerings.



The second question is, if you're going to deploy an SBC, does it have to be on 
proprietary hardware?  Again, full disclosure here - Metaswitch's Perimeta SBC 
is available as a VM, on COTS servers or on our own ATCA hardware.  
Unsurprisingly, therefore, our view would be that it absolutely does not have 
to be on proprietary hardware, and in fact limiting yourself to that option has 
significant drawbacks in terms of high costs and lost flexibility.



Around 2010, Intel made a strategic decision that they wanted to become a more 
significant player in the network space, and they started engineering their 
CPUs to move packets around faster.  An Intel server went from being able to 
forward less than 5 million packets per second to 160 million in the space of a 
few years, and this continues to grow. This is important because SBCs used to 
require FPGAs or dedicated network processors in order to handle DoS attacks 
and forward traffic at line rate without introducing jitter and latency.  
Intel's investment means this is now a thing of the past, and that level of CPU 
performance is available whether you're deploying on bare metal or with a 
hypervisor.
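As a rough back-of-envelope sketch of why those packet rates matter, assuming G.711 media at 20 ms packetisation (50 packets/s per direction) and an SBC relaying both legs of each call — illustrative assumptions, not vendor figures:

```python
# Illustrative assumption: G.711 at 20 ms ptime is 50 packets/s per direction.
# An SBC relaying both legs of a call handles 2 legs x 2 directions x 50 pps.
PPS_PER_RELAYED_CALL = 2 * 2 * 50  # 200 packet operations/s per call

def max_relayed_calls(platform_pps: int) -> int:
    """Theoretical ceiling on concurrent relayed calls, ignoring signalling,
    encryption and transcoding overhead entirely."""
    return platform_pps // PPS_PER_RELAYED_CALL

print(max_relayed_calls(5_000_000))    # older server: ~25,000 calls
print(max_relayed_calls(160_000_000))  # post-2010 Intel: ~800,000 calls
```

Real-world capacity will be far lower once encryption, transcoding and signalling load are factored in, but the headroom shift is the point.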



If you do choose to deploy a virtual solution, there are other challenges.  As 
has been noted, whether it's VMware's ESXi or OpenStack's KVM, the hypervisor 
and in particular the virtual switch present in a virtual system act as a 
bottleneck for packet processing.  At best, virtualization simply caps the 
performance you can achieve at whatever can pass through your virtual switch. 
 At worst, contention means latency and jitter become unacceptably high.  Most 
real-time communications products available as virtual offerings today 
therefore mandate that they are deployed without contention, so as to avoid the 
worst case impacts.  However, there is significant ongoing investment here in 
order to try to realize the goal of high-performance packet processing 
in a virtual environment.  Today, it is possible to achieve in the region of 
85% of the bare-metal throughput by using techniques like SR-IOV and DPDK, 
which are supported by most modern hardware.  These reduce the number of packet 
copies which are required, and provide direct connectivity between specific I/O 
queues on the hardware NICs and certain VMs, thereby bypassing the bottleneck 
of the virtual switch.  There is a trade-off, however, since you lose some of 
the benefits of virtualization like vMotion.



All is not lost: there are already commercially available virtual switches 
which provide much higher throughput than VMware's native offering without 
losing functionality like vMotion or the flexibility it brings to your 
network.



In short - if you see the need for an SBC at all, you would be wise to ensure 
you at least consider solutions which are not tied to proprietary hardware.  
Software solutions are live today in the networks of Tier 3s right up to global 
Tier 1 operators, both on bare metal and virtualized.  They can scale as low as 
a few tens of sessions and as high as tens of thousands on a single server, and 
of course a well-designed cloud offering can elastically scale from one end of 
the spectrum to the other if required.


--

Mike Dell
+44 (0)20 8362 7062 - Office
+44 (0)79 2125 4094 – Mobile



From: uknof [mailto:[email protected]] On Behalf Of Richard Smith
Sent: 13 April 2016 17:19
To: Simon Woodhead; Ryan Finnesey
Cc: [email protected]<mailto:[email protected]>
Subject: Re: [uknof] Virtualized SBC

On Thu, 7 Apr 2016 at 10:44 Simon Woodhead 
<[email protected]> wrote:
Hi Ryan

No is the direct answer but I can lend some wider perspective which might be 
helpful.

Can you run media at scale in VMs?

Assuming we’re talking hypervisor virtualisation and a proper enterprise grade 
hypervisor like vSphere, rather than public cloud, then yes you can with 
caveats. There used to be concerns around CPU scheduling and the introduction 
of jitter but recent versions have dramatically improved that. There is jitter 
just by virtue of using a shared resource but how much depends on contention 
levels; on a platform that is not oversubscribed it is perfectly acceptable and 
not dissimilar to natural jitter levels over typical end-user connectivity. 
VMware did some testing and research on this at various levels of subscription 
a few years ago which you might find useful: 
http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf

Most of our media is handled in hardware but where it is handled in software 
that is still on bare-metal with a few exceptions. Those exceptions are, for 
example, fax servers and even our office PBX. Performance from both is 
indistinguishable from bare metal for practical purposes, albeit at much lower 
volume.

Of the handful of virtualised SBCs out there my experience is that they're 
designed to bridge the gap for smaller ITSPs between not having an SBC and 
getting a hardware SBC. They're not going to run tens of thousands of 
concurrent calls, but they will do the job they're designed to do until you 
need to upgrade to hardware.

Would I buy an SBC?

No

Really? Frankly, I'm of the opinion that this is a bit in the 'sailing three 
sheets to the wind' territory of insane.

Don't get me wrong... For your internal call processing... by all means, if 
you've got the time, patience and skill then open-source all the way... I can 
see the tangible business and operational benefits to using open-source at the 
core... Not least of which is the fact that it's just nuts-and-bolts level of 
knowledge, which is priceless...

But when it comes to putting that core anywhere near the internet... SBCs all 
the way.

For SIP-to-SIP I firmly believe a properly configured open-source stack to be 
superior in _almost_ every way.

I would argue the contrary, inasmuch as a properly configured SIP stack is 
one of those holy-grail-type things. See point A below.

The only two selling points for a single box SBC over an open source stack that 
I see are:

- ticking the procurement box of a clean point of demarcation deployable right 
on the edge. This is trivial to do on open-source on commodity hardware but you 
can’t give a procurement bod a spec sheet, and to do it properly it won’t be a 
single box solution.

- hardware DSPs. You can obviously get hardware DSPs on a PCI card (we use them 
quite successfully where we can’t bypass media directly to hardware) but it is 
greatly preferable to handle media truly in hardware, which an SBC does, and 
does well.

Erm, not to put too fine a point on it, but you're missing some key points:

A) Sanity checking by default - the majority of the SBCs I've used do a modicum 
of sanity checking on sessions. Normalising session data between customer 
endpoints and the core platform is A Good Thing[tm]. The core platform has 
enough headaches to deal with already without having to brain-fart its way 
through the SIP messaging produced by the various (invariably broken) versions 
of stack belonging to certain products that may or may not be related to an 
ASCII character. Config fiddling to get this sanity checking and normalisation 
right is in fact a holy grail quest.

B) Security by default - Most of the SBCs I've had the (dis)pleasure of working 
with operate on the principle of "if it's not required, don't enable it" in 
terms of sane defaults... Oracle (née Acme) defaults are relatively sane... I 
would personally tweak some of the DDoS settings and stuff relating to 
untrusted signalling and media, but the out-of-the-box defaults will get a 
network online and reasonably secure. Sonus are in the same area...

Honestly, and I'm being sincere here, I'd like to see an open-source SIP stack 
handle some of the DDoSes I've seen hit some of my customers without impacting 
call-processing performance.

C) Topology hiding - This is another one of those Good Things[tm] - The 
less the outside world can work out from your signalling, the harder the job is 
for someone to compromise it. Proxies just don't cut the mustard...

D) Logical separation of architecture - Access networks are inherently busy all 
the time... Trunking networks are only busy when the majority of your customer 
base are on the phone... The two network architectures are diametrically 
opposed in terms of how you handle that traffic. Spreading that traffic over 
multiple proxies/endpoints/infrastructure gets tricky to manage... Especially 
in a failure scenario... SBCs inherently provide a separation boundary.

E) Forget about NAT. No really... just forget about it... No need to bother 
with this anymore...

Now, having said all of that I have in moments of weakness looked at these and 
quizzed vendor SEs. I believe if you feel you need an SBC they can do the job, 
just with the two compromises I mention above. Where I do struggle is on the 
performance claims particularly in terms of concurrent calls. The performance 
stats I’ve seen for similar things (e.g. 50k concurrent!) are trivial to 
achieve in SIP but in terms of media handling feel far closer to a basic media 
proxy than a full B2BUA performing encryption and transcoding (all in 
software). Thus before committing I’d be wanting to test with real-world 
traffic and would fully expect throughput to be a fraction of the datasheet 
figure. If that transpires anything like other virtual versions of hardware 
appliances we’ve looked at in other fields I expect you’ll conclude the 
hardware version to be preferable!

Datasheets and the real world are always skewed. If you're going to be doing 
lots of SIP manipulation and bit-fiddling with media, your mileage is going to 
vary significantly.

Ultimately it all comes down to hardware... Whether you're using software or 
not... whether you use commodity server hardware with a virtualised SBC or 
dedicated FPGAs in a fully hardware SBC.

The magic numbers that are going to dictate which you choose are CPS and 
concurrent sessions... Everything else is a negotiation...
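For what it's worth, the two numbers are linked by Little's law: concurrent sessions ≈ CPS × mean call hold time. A quick sketch, with made-up traffic figures:

```python
# Little's law applied to call sizing: sessions in progress equal the call
# arrival rate multiplied by the mean hold time. Figures are illustrative.
def concurrent_sessions(cps: float, mean_hold_seconds: float) -> float:
    return cps * mean_hold_seconds

# 50 calls/s at a 180 s average hold time implies ~9,000 concurrent sessions.
print(concurrent_sessions(50, 180))  # 9000.0
```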

 ~ Rich
