Hi Stanislav,

Apologies for the delayed reply. Whilst internally we do ensure that 
performance doesn't drop between Project Clearwater releases, we don't publish 
performance numbers for Project Clearwater, and we don't guarantee any level of 
performance. That said, no, that doesn't sound like normal behaviour for 
Project Clearwater. When we run high call loads through our system we normally 
drive CPU usage up to about 60%, and that's how we tend to evaluate stress 
tests. It sounds like Sprout became overloaded long before reaching 60% CPU, 
and was restarted by our monitoring tool, which deemed it unresponsive. This 
suggests that there is still something different about your environment, and 
that Sprout's throttling mechanism is not tuned correctly for it. You may want 
to try tweaking some of the throttling options, such as reducing the value of 
max_tokens. See 
http://clearwater.readthedocs.io/en/stable/Clearwater_Configuration_Options_Reference.html
for more information.
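For reference, the throttling options live in shared config (typically /etc/clearwater/shared_config). The fragment below is purely illustrative: the option names are taken from the configuration reference linked above, but the values are made-up starting points for experimentation, not recommendations.

```shell
# Illustrative throttling overrides in /etc/clearwater/shared_config.
# Values here are examples only - tune them for your own environment.
max_tokens=250          # cap on the size of Sprout's token bucket
init_token_rate=100.0   # initial token refill rate
min_token_rate=10.0     # floor below which the refill rate won't be throttled
```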

Metaswitch Networks do produce a hardened, supported version of Project 
Clearwater called Clearwater Core, and we do guarantee and provide performance 
numbers for each release of Clearwater Core.

Thanks,
Graeme

From: Clearwater [mailto:[email protected]] On 
Behalf Of Stanislav Khalup
Sent: 14 September 2016 13:37
To: [email protected]
Cc: Denis Plotnikov
Subject: Re: [Project Clearwater] Stress test results

Hello all,

I believe I am not the only one who is conducting stress testing. Could you 
please share your results? I am particularly interested in the number of 
concurrent calls that can be achieved with the smallest deployment.

BR,
Stanislav Khalup

From: Stanislav Khalup
Sent: Friday, September 9, 2016 6:09 PM
To: [email protected]
Cc: Denis Plotnikov <[email protected]>
Subject: Stress test results

Hello team,

At last, after updating to the latest release, limiting all VMs to 1 CPU / 2 GB 
RAM, and setting everything to defaults, we managed to stop the IMS cluster 
from crashing. The thing is, we see the same results while performing stress 
tests: after some time the system closes sockets regardless of ongoing calls. 
Is this kind of behavior normal for Clearwater IMS? I have attached an archive 
with the full SIPp logs and screenshots. Please look at the network-usage 
graph, which shows the drop in traffic when the sockets are closed: 
https://www.dropbox.com/s/q2l3nsaxyu6sf32/sipp_stress.tar.gz?dl=0

Another question that is bothering me is how to interpret the results of stress 
testing. How can you tell that your deployment is hitting its limit? All the 
orchestration demos out there show new nodes being added when CPU utilization 
reaches some 30%, but in our tests we could never see such loads, even with one 
Bono and one Sprout. Judging from the token bucket algorithm description, we 
assumed that the refused-connections metric should indicate that the system is 
at its limit, but this SNMP statistic is always zero in all our tests. So, how 
are you processing the results of stress tests? What metrics are you looking at?

BR,
Stanislav Khalup
_______________________________________________
Clearwater mailing list
[email protected]
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org