Tommaso, 
 
As it sounds like you've seen, there is a Project Clearwater capacity 
spreadsheet at 
http://www.projectclearwater.org/technical/clearwater-performance/. 
 
The numbers in this spreadsheet are based on the results of stress testing 
using a standard SIPp script 
(https://github.com/Metaswitch/sprout/blob/icscf/tests/load/call_load2.xml). 
The script simulates pairs of subscribers registering every 5 minutes and 
making a call to each other every 30 minutes. It runs on Clearwater stress 
nodes against the bono cluster in a deployment; each stress node accounts for 
100,000 subscribers, and multiple stress nodes can be set up. 
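As a rough back-of-envelope check (not something from the test scripts themselves), the load each stress node offers can be worked out from those parameters, assuming subscribers are simply paired off:

```python
# Back-of-envelope load implied by the standard SIPp script, per stress node.
# Assumptions: 100,000 subscribers per node, each re-registering every
# 5 minutes, paired off so each pair makes one call every 30 minutes.
SUBS_PER_NODE = 100_000
REG_INTERVAL_S = 5 * 60        # each subscriber re-registers every 5 minutes
CALL_INTERVAL_S = 30 * 60      # each pair makes a call every 30 minutes

registers_per_sec = SUBS_PER_NODE / REG_INTERVAL_S
pairs = SUBS_PER_NODE // 2
calls_per_sec = pairs / CALL_INTERVAL_S
bhca_per_node = calls_per_sec * 3600   # busy-hour call attempts per node

print(f"{registers_per_sec:.0f} REGISTERs/s")        # ~333
print(f"{calls_per_sec:.1f} calls/s")                # ~27.8
print(f"{bhca_per_node:,.0f} BHCA per stress node")  # 100,000
```

On those assumptions, each stress node offers about 100,000 BHCA, so a 3 million BHCA run would correspond to around 30 stress nodes.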
 
This testing is therefore done on a complete deployment, rather than by 
measuring individual components. The per-node results are calculated by 
running the system at 3 million BHCA under a balanced load and noting how 
many of each node type are needed. 
 
We don't currently have any results from the stress testing we've done with 
SIPp that allow a comparison of different user profiles.

It's worth bearing in mind that the values in the spreadsheet are actually a 
little out of date now, as we've been making substantial changes to Clearwater 
recently, including adding new functionality, and we expect that to continue 
for the next few sprints. Once these changes are in, we expect to refresh the 
performance numbers. Information on the stress testing, including details of 
how to set it up if you want to get some values out yourself (potentially 
modifying the tests to cover the profiles you're interested in), can be found 
at: 
https://github.com/Metaswitch/clearwater-docs/wiki/Clearwater-stress-testing.
 
In addition to the existing performance numbers and SIPp testing, we've also 
recently done some benchmark testing with IMSBench. A Clearwater deployment 
consisting of 11 m1.small nodes (3 bonos, 2 sprouts, 4 homesteads and 2 homers) 
was set up, and 100k subscribers were provisioned on the deployment. The 
scenarios attempted in the benchmark tests split into 50% calls, 30% SIP 
MESSAGEs, 15% re-registrations, 2.5% new client registrations and 2.5% 
de-registrations.
 
The IMSBench tests ran successfully at up to 340 scenario attempts per second. 
As 17.5% of the scenarios were registrations or re-registrations, this 
translates into ~60 registrations per second on the deployment, while it was 
also running 1.2M BHCA. 
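For clarity, the arithmetic behind those figures (using only the rates and percentages quoted above) works out as follows:

```python
# Quick check of the IMSBench figures quoted above.
scenarios_per_sec = 340
reg_fraction = 0.15 + 0.025   # re-registrations plus new registrations = 17.5%

registrations_per_sec = scenarios_per_sec * reg_fraction
attempts_per_hour = scenarios_per_sec * 3600

print(f"{registrations_per_sec:.1f} registrations/s")   # 59.5, i.e. ~60
print(f"{attempts_per_hour:,} scenario attempts/hour")  # 1,224,000
```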
 
I hope this helps; let me know if you have any other questions.
 
Ellie


-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Tommaso 
Cucinotta
Sent: 09 December 2013 17:48
To: [email protected]
Subject: [Clearwater] Performance of ClearWater @ AWS

Hi,

I'd like to ask whether there's any standard measurement of how ClearWater 
performs when installed on Amazon instances.
I've seen a spreadsheet seemingly containing numbers referring to individual 
components measured/profiled autonomously, then some calculations / deductions 
made based on those figures. Instead, I'd like to know what numbers to expect 
(e.g., number of authenticated registrations per second) from a complete 
end-to-end interaction through bono, sprout, homestead and back to the user 
with, let's say, an installation with 1 (m1.small) VM per component.

Can anyone provide such experimental figures?

Thanks,

        T.
--
Tommaso Cucinotta, Computer Engineer PhD Researcher at Alcatel-Lucent Bell 
Laboratories Blanchardstown Business & Technology Park Dublin - Ireland 
_______________________________________________
Clearwater mailing list
[email protected]
http://lists.projectclearwater.org/listinfo/clearwater
