How many successes and failures are you seeing? We primarily use the
clearwater-sip-stress package to check we haven’t introduced crashes under
load, and to check we haven’t significantly regressed the performance of
Project Clearwater. Unfortunately clearwater-sip-stress is not reliable enough
to produce accurate performance numbers for Project Clearwater, and we don't
measure or publish such numbers ourselves.
We tend to see around a 1% failure rate when running clearwater-sip-stress. If
your failure numbers are fluctuating at around 1%, this is probably down to
the test scripts not being completely reliable, and you won't actually have
hit the deployment's limit until you start seeing more failures than this.
From: Clearwater [mailto:clearwater-boun...@lists.projectclearwater.org] On
Behalf Of ??????? ?ats?????
Sent: 16 September 2016 10:17
Subject: [Project Clearwater] Performance limit measurement
We are running stress tests against our Clearwater deployment using Sip Stress.
We have noticed that the results are not consistent: the number of successful
calls changes across repetitions of the same test scenario.
We have tried increasing the values of max_tokens, init_token_rate and
target_latency_us, but we did not observe any difference.
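For reference, these throttling parameters are set in Clearwater's shared
configuration; a hypothetical fragment is shown below (the values are
illustrative, not recommendations, and the file path may differ on your
deployment):

```
# /etc/clearwater/shared_config (illustrative values only)
target_latency_us=100000   # target request latency in microseconds
max_tokens=1000            # cap on the token bucket used for throttling
init_token_rate=250        # initial token refill rate (requests/sec)
```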
What is the proposed way to discover the deployment's limit on how many
requests per second it can handle?
Thanks in advance,
Clearwater mailing list