Can you tell me more about your scenario? It sounds like you’re not using the
clearwater-sip-stress package, or at least not in exactly the form we package
up. If you’re not using the clearwater-sip-stress package then please can you
send details of your stress scenario?
Depending on how powerful your Sprout node is, I would expect 15000 calls per
second to be towards the upper limit of its performance. However, if CPU usage
is not particularly high, that would suggest that Sprout's throttling controls
might require further tuning. Do you know what return code the "unexpected
messages" have? 503s indicate that there is overload somewhere.
Sprout does adjust its throttling controls to match the load it's able to
process, but that adjustment is not immediate, and we recommend building stress
up gradually rather than immediately firing 15000 calls per second into the
system – for more information on that, see
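To illustrate why a sudden burst fails where a gradual ramp succeeds: Sprout's admission control behaves like a token bucket whose refill rate adapts over time. The sketch below is a deliberately simplified model I've written for illustration, not Sprout's actual implementation – the class and parameter names are my own.

```python
import time

class TokenBucket:
    """Toy token-bucket admission control (illustrative only, not Sprout's code).

    Requests are admitted while tokens remain; once the bucket is drained,
    further requests are rejected until the refill rate replenishes it --
    which is why an instantaneous burst sees rejections (Sprout would
    answer 503) that a gradual ramp-up avoids.
    """

    def __init__(self, rate, max_tokens):
        self.rate = rate            # tokens added per second
        self.max_tokens = max_tokens
        self.tokens = max_tokens    # start with a full bucket
        self.last = time.monotonic()

    def admit(self):
        # Refill based on elapsed time, capped at the bucket size.
        now = time.monotonic()
        self.tokens = min(self.max_tokens,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # rejected: in Sprout this surfaces as a 503
```

In the real system the refill rate itself is tuned up or down based on measured latency, which is the adjustment that "is not immediate" above.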
One final thought I had was that the node you're running stress on might itself
be overloaded. If the stress node is not responding to messages in a timely
fashion then that will generate timeouts and unexpected messages.
From: Clearwater [mailto:clearwater-boun...@lists.projectclearwater.org] On
Behalf Of ??????? ?ats?????
Sent: 16 September 2016 15:16
Subject: Re: [Project Clearwater] Performance limit measurement
Thanks a lot for your response.
In our scenario we are using the Stress node to generate 15000 calls in 60
seconds. The number of unsuccessful calls varies from ~500 to ~5000, even in
subsequent repetitions of the same scenario.
According to Wireshark, the failures happen because Sprout does not send the
correct responses in time, so we get "time-outs" and "unexpected messages" on
the Stress node.
The Sprout node has sufficient CPU and memory resources.
What could be the reason for this instability in our deployment?
Thank you in advance,
2016-09-16 16:14 GMT+03:00 Graeme Robertson
How many successes and failures are you seeing? We primarily use the
clearwater-sip-stress package to check we haven’t introduced crashes under
load, and to check we haven’t significantly regressed the performance of
Project Clearwater. Unfortunately clearwater-sip-stress is not reliable enough
to generate completely accurate performance numbers for Project Clearwater (and
we don’t accurately measure Project Clearwater performance or provide numbers).
We tend to see around 1% failures when running clearwater-sip-stress. If your
failure numbers are fluctuating at around 1% then this is probably down to the
test scripts not being completely reliable, and you won’t have actually hit the
deployment’s limit until you start seeing more failures than this.
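The ~1% noise floor described above suggests a simple rule of thumb: treat a run's failure rate as significant only once it clearly exceeds the baseline the test scripts themselves produce. A small helper (my own sketch, with a hypothetical name and the 1% baseline taken from the paragraph above) might look like:

```python
def hit_deployment_limit(successes, failures, baseline=0.01):
    """Return True if the failure rate clearly exceeds the ~1% noise
    floor typically produced by clearwater-sip-stress itself.

    Hypothetical helper for interpreting stress-run results; the 1%
    baseline is the figure quoted for normal clearwater-sip-stress runs.
    """
    total = successes + failures
    if total == 0:
        return False  # no calls made, nothing to conclude
    return failures / total > baseline
```

For example, 150 failures out of 15000 calls is exactly 1% and would be written off as script noise, whereas 5000 failures out of 15000 clearly indicates the deployment's limit has been reached.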
On Behalf Of ??????? ?ats?????
Sent: 16 September 2016 10:17
Subject: [Project Clearwater] Performance limit measurement
we are running stress tests against our Clearwater deployment using SIP Stress.
We have noticed that the results are not consistent, as the number of
successful calls changes during repetitions of the same test scenario.
We have tried increasing the values of max_tokens, init_token_rate and
target_latency_us, but we did not observe any difference.
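For reference, in a standard Project Clearwater install these options are set as key=value lines in the shared configuration. The values below are purely illustrative, and the exact file location and the procedure for propagating changes depend on your deployment, so check your install's documentation:

```
# Illustrative shared_config fragment -- values are examples, not recommendations
max_tokens=1000
init_token_rate=500
target_latency_us=100000
```

Note that these parameters control Sprout's overload throttling, so raising them only helps if throttling (rather than raw capacity or the stress node itself) is the bottleneck.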
What is the proposed way to discover the deployment's limit on how many
requests per second it can handle?
Thanks in advance,
Clearwater mailing list