Hi Michael,

How many successes and failures are you seeing? We primarily use the 
clearwater-sip-stress package to check we haven’t introduced crashes under 
load, and to check we haven’t significantly regressed the performance of 
Project Clearwater. Unfortunately clearwater-sip-stress is not reliable enough 
to generate completely accurate performance numbers for Project Clearwater (and 
we don’t accurately measure Project Clearwater performance or provide numbers). 
We tend to see around 1% failures when running clearwater-sip-stress. If your 
failure numbers are fluctuating at around 1% then this is probably down to the 
test scripts not being completely reliable, and you won’t have actually hit the 
deployment’s limit until you start seeing more failures than this.
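
As a rough worked example (with made-up numbers): if a run attempts 10,000 
calls and 100 of them fail, that’s a failure rate of 100 / 10,000 = 1%, which 
is within the noise we’d expect from the test scripts themselves. If instead 
500 of 10,000 calls fail (5%), that’s much more likely to mean you’ve genuinely 
hit the deployment’s limit.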

Thanks,
Graeme


From: Clearwater [mailto:clearwater-boun...@lists.projectclearwater.org] On 
Behalf Of Michael Katsoulis
Sent: 16 September 2016 10:17
To: Clearwater@lists.projectclearwater.org
Subject: [Project Clearwater] Performance limit measurement

Hi all,

we are running stress tests against our Clearwater deployment using a SIP 
stress node.
We have noticed that the results are not consistent: the number of successful 
calls changes between repetitions of the same test scenario.

We have tried increasing the values of max_tokens, init_token_rate, 
min_token_rate and target_latency_us, but we did not observe any difference.
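
For reference, this is roughly what we have been setting in 
/etc/clearwater/shared_config (the values below are just examples of what we 
tried, not recommendations, and the defaults/units may differ on our 
deployment):

    # Overload-control settings we tried increasing (illustrative values only)
    target_latency_us=500000    # target per-request latency, in microseconds
    max_tokens=2000             # upper bound on the throttling token bucket
    init_token_rate=500.0       # initial token refill rate
    min_token_rate=10.0         # floor on the token refill rate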

What is the recommended way to determine the deployment's limit on how many 
requests per second it can serve?

Thanks in advance,
Michael Katsoulis
