Hi,
Are you not signed up to the mailing list? I've attached the response from
Wednesday, but that was sent to the mailing list so if you're not signed up
then you probably won't have seen it.
In order to ensure you see future responses, I'd recommend signing up by
clicking the "Mailing List Signup" link from here:
http://www.projectclearwater.org/community/
Thanks,
Seb.
From: Clearwater [mailto:[email protected]] On
Behalf Of JACKSON JULIET ROY
Sent: 03 March 2017 06:19
To: [email protected]
Subject: Re: [Project Clearwater] Throttling effect in sprout
Hi Clearwater team,
Can you please respond to the query below, which I posted on Monday the 27th?
Thanks,
Jackson
From: JACKSON JULIET ROY
Sent: 27 February 2017 13:26
To: '[email protected]'
<[email protected]<mailto:[email protected]>>
Subject: Throttling effect in sprout
Dear All,
We are trying to evaluate the performance of a single sprout node against two
sprout nodes in Clearwater IMS under SIP call load (using the multi-VM based
Clearwater IMS images). This is to demonstrate the necessity and advantage of
scaling out the sprout node.
What we are attempting is to find the overload point of a single sprout node,
i.e. the load at which calls start failing, whereas the two-sprout scenario
handles a similar load fine. We use SIPp to emulate calls, normally starting
at 50 calls/sec and increasing to 100-150 calls/sec.
However, when we run this experiment (first finding the overload point of a
single sprout node), we see inconsistent results. Sometimes a single sprout
node fails calls at even 50 calls/sec, but if we repeat the experiment a while
later the same single node tolerates more than 100 calls/sec. Based on our
reading of the Clearwater website, we understand this might be due to the
dynamic throttling supported by Clearwater IMS nodes. We have tried adjusting
the maximum number of tokens as well as the token rate to rule out dynamic
throttling, but saw no visible change in behaviour. As a result we cannot pin
down the overload point of a single sprout node, and so cannot run a test that
shows a clear performance comparison between one and two sprout nodes.
I have a few queries. Could you please help us get clarity on the following?
1) Is it possible to disable dynamic throttling? If not, is there a
parameter setting (e.g. a token-related parameter) that ensures throttling does
not happen?
2) Is there a predefined overload point, in terms of SIP calls/sec, for a
single sprout node? (We are using a single-core, 2 GB RAM, 20 GB hard disk VM
for sprout.)
3) Are there any recommended parameters for this kind of testing, in terms
of SIP load (i.e. calls/sec) or configuration settings (token-related or
otherwise)?
4) Our understanding is that sprout is the first component expected to be
scaled out, because it will be more heavily loaded than the other Clearwater
IMS nodes. Is our understanding correct?
5) When we scale out sprout, is scaling out sprout alone enough, or do
other nodes also need to be scaled out to see the performance improvement?
Thanks,
Jackson
--- Begin Message ---
Hi,
The overload point of a single sprout node will depend on the size of the node,
amongst other factors, and so we don't provide a predefined overload point. 50
calls/second is probably quite close to the limit already for a single CPU
sprout node.
As a general rule, if you want to see whether a sprout node is overloaded, the
most reliable measure is probably CPU usage on the node.
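For example, leaving a simple sampler like this running on the sprout node
during a test run will show whether the CPU is saturating:
  # Sample CPU usage every 5 seconds while the SIPp load is running
  vmstat 5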
As for your comments regarding tokens: if you think that the tokens are
limiting the number of calls you can make, then you should be able to see this
fairly easily because any SIP messages which cannot be handled by Sprout (due
to not having any tokens available) will receive a 503 response. Also, you will
see "Status" logs from the load_monitor in /var/log/sprout/sprout_current.txt
which will tell you the state of the load monitor periodically. If the token
rate is the limiting factor, then you could either adjust the token settings
(as per below) or you could just ramp up the call load more slowly (i.e. rather
than going from 0 calls/second to 100 immediately, you slowly increase the call
rate from 0 to 100 to allow Sprout to adjust the token values).
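To make that concrete, here is roughly what checking for throttling and ramping
the load might look like. The grep is based on the log location above; the SIPp
rate-ramping flags and the placeholder target/scenario are assumptions about
your SIPp build, so check "sipp -h" before relying on them:
  # Watch the load monitor's periodic "Status" lines on the sprout node
  tail -f /var/log/sprout/sprout_current.txt | grep Status
  # Ramp the call rate gradually instead of starting at the full rate
  # (flags assumed available in recent SIPp builds -- check sipp -h)
  sipp <sprout address> -sf <scenario>.xml -r 10 -rate_increase 10 -rate_max 100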
For your specific questions:
1) You can't disable dynamic throttling using configuration options.
However, you can increase the initial maximum number of tokens as well as the
token fill rate. To do this, use the options listed here:
http://clearwater.readthedocs.io/en/latest/Clearwater_Configuration_Options_Reference.html,
specifically "max_tokens" and "init_token_rate". Increasing these will increase
the number of tokens available, and hence make throttling kick in later (there
is a rough sketch of this after these answers).
2) No, there's no predefined overload point.
3) The recommended settings depend on your deployment. If you're trying to
stress Sprout specifically, then you may have issues if you're directing your
SIP traffic through the Bono node (which our older stress scripts do). We don't
believe that Bono scales as well as Sprout, and it may well become the
bottleneck first, so when performing stress testing we advise that you either
stress test the IMS core directly (using the new-style stress scripts
documented here:
http://clearwater.readthedocs.io/en/latest/Clearwater_stress_testing.html?#sip-stress-nodes)
or use a carrier-grade P-CSCF as recommended for production deployments (such
as Metaswitch Perimeta).
4) As above, if your stress traffic is going through Bono then Bono may
well be the bottleneck here. Also, depending on your load profile and the size
of your VMs, Homestead could also be a bottleneck. If you think that Homestead
might be limiting the load that Sprout can handle, then you should use more (or
more powerful) Homestead nodes to ensure that Homestead isn't the bottleneck.
You should be able to see whether Sprout is the bottleneck by looking at the
CPU usage on your Sprout nodes.
5) If you've ensured that Sprout is the bottleneck in your deployment,
then scaling out Sprout alone should be enough (each node type can be scaled
out independently of the others).
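As a rough sketch of the token changes mentioned in answer 1: on current
installs these options normally live in /etc/clearwater/shared_config, but the
file location and the example values below are assumptions for illustration, so
check the configuration options reference linked above for your version:
  # Raise the throttling limits on the sprout node (illustrative values only)
  max_tokens=1000
  init_token_rate=500.0
  # Re-apply the configuration and restart sprout afterwards so the new values
  # take effect (the exact procedure depends on your Clearwater version).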
I hope that helps,
Seb.
--- End Message ---
_______________________________________________
Clearwater mailing list
[email protected]
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org