Hi,
I got errors while stress testing. I ran with 40000 subscribers for a
30-minute duration, but the run completed early (after only ~2 minutes). What
is the reason behind this? Is there a bug in the script itself, or some other
problem?

*[]ubuntu@stress:~$ /usr/share/clearwater/bin/run_stress ims.com 40000 30
--multiplier=16*
Starting initial registration, will take 500 seconds
Initial registration succeeded
Starting test
Test complete

Elapsed time: 00:02:06
Start: 2018-03-28 06:18:00.636804
End: 2018-03-28 06:20:07.829051

Total calls: 6989
Successful calls: 1391 (19.9027042495%)
Failed calls: 3953 (56.5603090571%)
Unfinished calls: 1645

Retransmissions: 0

Average time from INVITE to 180 Ringing: 15092.0ms
# of calls with 0-2ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 2-10ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 10-20ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 20-50ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 50-100ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 100-200ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 200-500ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 500-1000ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 1000-2000ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 2000+ms from INVITE to 180 Ringing: 3008 (43.0390613822%)
*Failed: call success rate 19.9027042495% is lower than target 100.0%!*
*Traceback (most recent call last):*
*  File "/usr/share/clearwater/bin/run_stress", line 443, in <module>*
*    reg_success_rate = 100 * float(row['SuccessfulCall(C)']) /
float(row['TotalCallCreated'])*
*ZeroDivisionError: float division by zero*



Please suggest a solution. I have created 50K subscribers in Vellum, and I am
using a single node for Sprout and for the other components as well.
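For what it's worth, the final crash is an unguarded division in run_stress: when sipp records no created calls, `float(row['TotalCallCreated'])` is 0.0 and the rate calculation raises ZeroDivisionError. A minimal sketch of a guard, using the CSV field names copied from the traceback (the real script's structure may differ), would be:

```python
# Hypothetical guard for the rate calculation that crashes in run_stress.
# Field names are taken from the traceback above; the actual script may
# differ in structure.
def call_success_rate(row):
    total = float(row.get('TotalCallCreated') or 0)
    if total == 0:
        # sipp created no calls, so the rate is undefined; report that
        # instead of raising ZeroDivisionError.
        return None
    return 100 * float(row['SuccessfulCall(C)']) / total

print(call_success_rate({'SuccessfulCall(C)': '1391', 'TotalCallCreated': '6989'}))
print(call_success_rate({'SuccessfulCall(C)': '0', 'TotalCallCreated': '0'}))
```

With the numbers from the run above this reproduces the reported 19.9027042495% success rate, and returns None instead of crashing when no calls were created.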

How can I increase the number of nodes for Sprout and the other components?
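For context on the expected call volume: per the stress-tool documentation quoted later in this thread, the load profile is 1.3 calls and 2 re-registrations per subscriber per hour, scaled linearly by --multiplier. A back-of-envelope estimate for the 40000-subscriber run above (an illustrative sketch, not the tool's actual code):

```python
# Rough offered-load estimate, assuming the VoLTE profile described in
# the Clearwater docs: 1.3 calls/sub/hour, scaled by --multiplier.
# This mirrors the documentation quoted below, not run_stress itself.
def estimate_load(subscribers, multiplier, duration_minutes):
    calls_per_hour = subscribers * 1.3 * multiplier
    total_calls = calls_per_hour * duration_minutes / 60.0
    calls_per_second = calls_per_hour / 3600.0
    return total_calls, calls_per_second

total, rate = estimate_load(40000, 16, 30)
print(total, rate)  # 416000.0 calls expected over 30 min, ~231 calls/sec
```

Against that expectation, the run above created only 6989 calls and stopped after about 2 minutes, which is consistent with the tool aborting early rather than completing its 30-minute profile.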


Thanks


On Tue, Mar 27, 2018 at 5:26 PM, Sunil Kumar <[email protected]> wrote:

> Thanks Robert for replying,
>
> Yes, the file is there, but it is empty:
>
> [sprout]ubuntu@sprout:/var/log/sprout$ ls
> access_20180326T180000Z.txt  access_20180327T000000Z.txt
> access_20180327T060000Z.txt  access_20180327T120000Z.txt
> access_20180327T180000Z.txt  sprout_20180327T150000Z.txt  *sprout_err.log*
> access_20180326T190000Z.txt  access_20180327T010000Z.txt
> access_20180327T070000Z.txt  access_20180327T130000Z.txt
> access_20180327T190000Z.txt  sprout_20180327T160000Z.txt  sprout_out.log
> access_20180326T200000Z.txt  access_20180327T020000Z.txt
> access_20180327T080000Z.txt  access_20180327T140000Z.txt
> access_current.txt           sprout_20180327T170000Z.txt
> access_20180326T210000Z.txt  access_20180327T030000Z.txt
> access_20180327T090000Z.txt  access_20180327T150000Z.txt  analytics.log
>             sprout_20180327T180000Z.txt
> access_20180326T220000Z.txt  access_20180327T040000Z.txt
> access_20180327T100000Z.txt  access_20180327T160000Z.txt
> sprout_20180327T130000Z.txt  sprout_20180327T190000Z.txt
> access_20180326T230000Z.txt  access_20180327T050000Z.txt
> access_20180327T110000Z.txt  access_20180327T170000Z.txt
> sprout_20180327T140000Z.txt  sprout_current.txt
>
> *[sprout]ubuntu@sprout:/var/log/sprout$ cat sprout_err.log*
> [sprout]ubuntu@sprout:/var/log/sprout$
>
> Regards,
> Sunil
>
>
> On Tue, Mar 27, 2018 at 5:15 PM, Robert Day <[email protected]>
> wrote:
>
>> Hi Sunil,
>>
>>
>>
>> Are there any ‘Error’-level logs in /var/log/sprout or /var/log/homestead
>> on the VMs? That might suggest what’s causing the 503 Service Unavailable
>> error.
>>
>>
>>
>> Best,
>>
>> Rob
>>
>>
>>
>> *From:* Sunil Kumar [mailto:[email protected]]
>> *Sent:* 26 March 2018 12:28
>> *To:* [email protected]; Michael Duppré <
>> [email protected]>
>> *Subject:* Re: [Project Clearwater] CW team please help - stress testing
>>
>>
>>
>> Hi Michael,
>>
>> I am not able to make calls; I get 503 Service Unavailable. Can you please
>> check and guide me to a quick solution?
>>
>>
>>
>> []ubuntu@stress:~$ /usr/share/clearwater/bin/run_stress ims.com 100 2
>>
>> Starting initial registration, will take 1 seconds
>>
>> Initial registration succeeded
>>
>> Starting test
>>
>> Test complete
>>
>>
>>
>> Elapsed time: 00:01:50
>>
>> Start: 2018-03-26 23:18:41.605000
>>
>> End: 2018-03-26 23:20:32.453238
>>
>>
>>
>> Total calls: 2
>>
>> Successful calls: 0 (0.0%)
>>
>> Failed calls: 2 (100.0%)
>>
>> Unfinished calls: 0
>>
>>
>>
>> Retransmissions: 0
>>
>>
>>
>> Average time from INVITE to 180 Ringing: 0.0ms
>>
>> # of calls with 0-2ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 2-10ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 10-20ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 20-50ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 50-100ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 100-200ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 200-500ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 500-1000ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 1000-2000ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 2000+ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> Failed: call success rate 0.0% is lower than target 100.0%!
>>
>>
>>
>> Total re-REGISTERs: 6
>>
>> Successful re-REGISTERs: 6 (100.0%)
>>
>> Failed re-REGISTERS: 0 (0.0%)
>>
>>
>>
>> REGISTER retransmissions: 0
>>
>>
>>
>> Average time from REGISTER to 200 OK: 59.0ms
>>
>>
>>
>> Log files at /var/log/clearwater-sip-stress/12890_*
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> []ubuntu@stress:~$ cat /var/log/clearwater-sip-stress
>> /12890_caller_errors.log
>>
>> sipp: The following events occurred:
>>
>> 2018-03-26      23:19:37.070794 1522086577.070794: Aborting call on
>> unexpected message for Call-Id '[email protected]': while expecting
>> '183' (index 2), received 'SIP/2.0 503 Service Unavailable
>>
>> Via: SIP/2.0/TCP 127.0.1.1:34112;received=10.22
>> 4.61.34;branch=z9hG4bK-12900-1-0
>>
>> Record-Route: <sip:scscf.sprout.ims.com;tran
>> sport=TCP;lr;billing-role=charge-term>
>>
>> Record-Route: <sip:scscf.sprout.ims.com;tran
>> sport=TCP;lr;billing-role=charge-orig>
>>
>> Call-ID: [email protected]
>>
>> From: <sip:[email protected]>;tag=12900SIPpTag001
>>
>> To: <sip:2010000041@ ims .com>;tag=z9hG4bKPjRhUbQK8NJ5B
>> lp4qIYX7f1YWhM2FQZe2M
>>
>> CSeq: 1 INVITE
>>
>> P-Charging-Vector: icid-value="12900SIPpTag001";orig-ioi= ims.com;term-ioi=
>> ims.com
>>
>> P-Charging-Function-Addresses: ccf=0.0.0.0
>>
>> Content-Length:  0
>>
>>
>>
>>
>>
>> Please find the sprout log attached.
>>
>>
>>
>> thanks,
>>
>>  sunil
>>
>>
>>
>>
>>
>>
>>
>> On Fri, Mar 23, 2018 at 11:26 PM, Michael Duppré <
>> [email protected]> wrote:
>>
>> Hi Sunil,
>>
>>
>>
>> Sorry to hear that you’ve found our reply too slow to be helpful this
>> time, but I’m glad to hear that you seem to have made some progress on your
>> own. As Jim mentioned, our mailing list is `best effort` while we focus on
>> development work to keep the quality of Clearwater as high as possible –
>> thanks, Jim, I’m happy to hear that you like our free service!
>>
>>
>>
>> Regarding your questions:
>>
>> ·  You’ve mentioned that you have hit other problems with the stress tool;
>> could you please head over to
>> https://github.com/Metaswitch/project-clearwater-issues/issues and open up
>> an issue with some specific diagnostics? That way we can track and
>> prioritize the problem to get it fixed as soon as possible.
>>
>> ·  It sounds like you’ve now got some calls working; that’s good news!
>> Could you please share your stress tool and sprout logs, to pinpoint
>> whether this is a problem with the generation of load or with Clearwater
>> handling the calls? Also, you’ve mentioned that you’ve made some changes
>> to the tool; what exactly did you change? I’d just like to be sure that
>> none of your changes are causing this behaviour.
>>
>> ·  About the `multiplier` argument: Thanks for pointing out that our
>> documentation is missing something useful here, we will update it. For now,
>> if you take a look at the help text of the stress tool by running
>> `/usr/share/clearwater/bin/run_stress --help` you can find the
>> explanation:
>>
>> o *--multiplier MULTIPLIER: Multiplier for the VoLTE load profile (e.g.
>> passing 2 here will mean 2.6 calls and 4 re-registers per subs per hour)*
>>
>> For more information about the VoLTE load profile you can have a look at
>> the documentation for the stress tool at
>> http://clearwater.readthedocs.io/en/stable/Clearwater_stress_testing.html,
>> specifically:
>>
>> o *[The stress tool will] send traffic, using a fixed load profile of
>> 1.3 calls/hour for each subscriber (split equally between incoming and
>> outgoing calls) and 2 re-registrations per hour*
>>
>>
>>
>> Good luck with your next steps using Clearwater! Hope you won’t hit any
>> other problems, but if you do, please let us know!
>>
>>
>>
>> Kind regards,
>>
>> Michael
>>
>>
>>
>>
>>
>> *From:* Jim Page [mailto:[email protected]]
>> *Sent:* 23 March 2018 09:20
>> *To:* [email protected]
>> *Cc:* Michael Duppré <[email protected]>
>>
>>
>> *Subject:* Re: [Project Clearwater] CW team please help - stress testing
>>
>>
>>
>> Dude, this is a free service. I am sure if you buy some of their
>> commercial licenses they will prioritise your request, but if you are on
>> here you need to understand that service given here is ‘best effort’, and
>> in my view they perform a fantastic job. So calm down.
>>
>>
>>
>> *RedMatter Ltd*
>>
>> *Jim Page*
>> *VP Mobile Services*
>>
>> +44 (0)333 150 1666
>> +44 (0)7870 361412
>>
>> [email protected]
>>
>>
>>
>> On 23 Mar 2018, at 02:29, Sunil Kumar <[email protected]> wrote:
>>
>>
>>
>> Hi,
>>
>> Thanks for replying, but you are replying very late; that is not good. I
>> have been waiting for your reply for the last 3 days :-( .
>>
>> Anyway, I think the script has other problems as well. Maybe you can check
>> and fix them so that others won't hit the same issues. Somehow I fixed the
>> problem, though it took a lot of time to read the script and make some
>> changes.
>>
>>
>>
>> I want to ask a few questions and am *expecting a reply within a day* :-)
>>
>> 1. When I use 1000 subscribers for a 10-minute duration, only a few calls
>> are successful (around 300) and no calls fail. How can I increase the
>> number of calls?
>>
>> 2. Can you explain the exact use of the *--multiplier* parameter in
>> detail? I request that you document all the parameters, so that others do
>> not have trouble finding them.
>>
>>
>>
>> Thanks,
>>
>> Sunil
>>
>>
>>
>>
>>
>> On Thu, Mar 22, 2018 at 8:09 PM, Michael Duppré <
>> [email protected]> wrote:
>>
>> Hello Sunil,
>>
>>
>>
>> Sorry about the stress tool not working properly with a lower number of
>> subscribers, that looks like a bug in the tool. I have raised issue
>> https://github.com/Metaswitch/project-clearwater-issues/issues/30 to
>> track and fix this problem, feel free to provide any other information on
>> that ticket if you hit similar problems. Thanks for your help finding this
>> bug!
>>
>>
>>
>> It looks like you’ve done the right things and gone back to a slightly
>> higher number of subscribers (50), and looked at the stress log file and
>> the sprout log file. Unfortunately it looks like you’ve copied out the
>> wrong time period from the sprout logs: in your email below, the stress
>> tool logs are from 17:48, but the sprout logs that you’ve sent are from 5
>> hours earlier, at 12:27.
>>
>> It's similar for your tcpdump: a good idea to have a look at this, but
>> unfortunately what you’ve copied into your email is only the register
>> flow, which is successful! :-)
>>
>>
>>
>> You’re probably pretty close to finding the reason why the calls are
>> failing in the sprout logs; could you please make sure you look at the
>> timestamps from the time you ran the stress tool?
>>
>>
>>
>> Good luck and kind regards,
>>
>> Michael
>>
>>
>>
>>
>>
>> *From:* Clearwater [mailto:[email protected]]
>> *On Behalf Of *Sunil Kumar
>> *Sent:* 20 March 2018 14:57
>> *To:* [email protected]; Bennett Allen <
>> [email protected]>
>> *Subject:* Re: [Project Clearwater] CW team please help - stress testing
>>
>>
>>
>> Hi,
>>
>> I am facing a problem in stress testing; please look into the log. I am
>> not able to debug the problem.
>>
>>
>>
>> I have taken this from Wireshark; actually, I captured it with tcpdump.
>>
>>
>>
>> REGISTER sip:ims.com SIP/2.0
>>
>> Via: SIP/2.0/TCP 127.0.1.1:34768;branch=z9hG4bK-784-1-0
>>
>> From: <sip:[email protected]>;tag=784SIPpTag001
>>
>> Content-Length: 0
>>
>> Require: Path
>>
>> Path: <sip:127.0.1.1:5082;transport=tcp;lr>
>>
>> P-Charging-Vector: icid-value=d4511351a7e24c5ff16243bac827fc3f1
>>
>> Supported: path
>>
>> To: <sip:[email protected]>
>>
>> Route: <sip:[email protected];lr>
>>
>> Max-Forwards: 70
>>
>> Contact: <sip:[email protected]:34768>;reg-id=1;+sip.instance="<
>> urn:uuid:00000000-0000-0000-0000-000000000001>"
>>
>> Call-ID: [email protected]
>>
>> CSeq: 1 REGISTER
>>
>> Expires: 3600
>>
>> Allow: INVITE, ACK, OPTIONS, CANCEL, BYE, UPDATE, INFO, REFER, NOTIFY,
>> MESSAGE, PRACK
>>
>> Supported: path, gruu
>>
>> Authorization: Digest username="[email protected]",realm="ims.com
>> ",uri="sip:ims.com",nonce="",response="",algorithm=Digest-MD5
>>
>> User-Agent: 00-00000-0000000000000 Phone IMS 10.0
>>
>> P-Access-Network-Info: IEEE-802.11;i-wlan-node-id=000
>> 000000000;country=GB;local-time-zone="2016-01-01T00:00:00-00:00"
>>
>> P-Visited-Network-ID: ims.com
>>
>>
>>
>> SIP/2.0 401 Unauthorized
>>
>> Via: SIP/2.0/TCP 127.0.1.1:34768;received=10.22
>> 4.61.13;branch=z9hG4bK-784-1-0
>>
>> Call-ID: [email protected]
>>
>> From: <sip:[email protected]>;tag=784SIPpTag001
>>
>> To: <sip:[email protected]>;tag=z9hG4bKPjDBaGZjqTrLQiDSlHihO36o
>> MPm7fxz2sQ
>>
>> CSeq: 1 REGISTER
>>
>> P-Charging-Vector: icid-value="d4511351a7e24c5ff16243bac827fc3f1"
>>
>> WWW-Authenticate: Digest  realm="ims.com",nonce="0e07c1b
>> 77b566f37",opaque="5171f001504c2c3a",algorithm=MD5,qop="auth"
>>
>> Content-Length:  0
>>
>>
>>
>> REGISTER sip:ims.com SIP/2.0
>>
>> Via: SIP/2.0/TCP 127.0.1.1:34768;branch=z9hG4bK-784-1-2
>>
>> From: <sip:[email protected]>;tag=784SIPpTag001
>>
>> Content-Length: 0
>>
>> Require: Path
>>
>> Path: <sip:127.0.1.1:5082;transport=tcp;lr>
>>
>> P-Charging-Vector: icid-value=d4511351a7e24c5ff16243bac827fc3f1
>>
>> Supported: path
>>
>> To: <sip:[email protected]>
>>
>> Route: <sip:[email protected];lr>
>>
>> Max-Forwards: 70
>>
>> Contact: <sip:[email protected]:34768>;reg-id=1;+sip.instance="<
>> urn:uuid:00000000-0000-0000-0000-000000000001>"
>>
>> Call-ID: [email protected]
>>
>> CSeq: 1 REGISTER
>>
>> Expires: 3600
>>
>> Allow: INVITE, ACK, OPTIONS, CANCEL, BYE, UPDATE, INFO, REFER, NOTIFY,
>> MESSAGE, PRACK
>>
>> Supported: path, gruu
>>
>> Authorization: Digest username="[email protected]",realm="ims.com
>> ",cnonce="66334873",nc=00000001,qop=auth,uri="sip:sprout.ims.com:5052",
>> nonce="0e07c1b77b566f37",response="788d4520717e4e7b29f7
>> fab43fdc448f",algorithm=MD5,opaque="5171f001504c2c3a"
>>
>> User-Agent: 00-00000-0000000000000 Phone IMS 10.0
>>
>> P-Access-Network-Info: IEEE-802.11;i-wlan-node-id=000
>> 000000000;country=GB;local-time-zone="2016-01-01T00:00:00-00:00"
>>
>> P-Visited-Network-ID: ims.com
>>
>>
>>
>> SIP/2.0 200 OK
>>
>> Service-Route: <sip:scscf.sprout.ims.com;transport=TCP;lr;orig;username=
>> 2010000039%40ims.com;nonce=0e07c1b77b566f37>
>>
>> Via: SIP/2.0/TCP 127.0.1.1:34768;received=10.22
>> 4.61.13;branch=z9hG4bK-784-1-2
>>
>> Call-ID: [email protected]
>>
>> From: <sip:[email protected]>;tag=784SIPpTag001
>>
>> To: <sip:[email protected]>;tag=z9hG4bKPjIvjh2DjwvU.vVNEv.nOiYA
>> fsZRgMjHDF
>>
>> CSeq: 1 REGISTER
>>
>> P-Charging-Vector: icid-value="d4511351a7e24c5ff16243bac827fc3f1"
>>
>> Supported: outbound
>>
>> Contact: <sip:[email protected]:34768>;expires=1800;+sip.instance=
>> "<urn:uuid:00000000-0000-0000-0000-000000000001>";reg-id=1;pub-gruu="
>> sip:[email protected];gr=urn:uuid:00000000-0000-0000-0000-000000000001"
>>
>> Require: outbound
>>
>> Path: <sip:127.0.1.1:5082;transport=tcp;lr>
>>
>> P-Associated-URI: <sip:[email protected]>
>>
>> Content-Length:  0
>>
>>
>>
>>
>>
>> Thanks in advance. Please reply.
>>
>>
>>
>> cheers,
>>
>> sunil
>>
>>
>>
>>
>>
>> On Tue, Mar 20, 2018 at 6:53 PM, Sunil Kumar <[email protected]>
>> wrote:
>>
>> Hi,
>>
>> It is using some other port on the stress node, not 5082. Is this a
>> problem? If so, how can I fix it? I have tried to open port 5082 on the
>> stress node using *sudo ufw allow 5082/tcp*, but it had no effect.
>>
>> Please check the wireshark log:
>>
>>
>>
>> Frame 2406: 703 bytes on wire (5624 bits), 703 bytes captured (5624 bits)
>>
>> Ethernet II, Src: PcsCompu_ff:d2:88 (08:00:27:ff:d2:88), Dst:
>> PcsCompu_ab:71:0f (08:00:27:ab:71:0f)
>>
>> Internet Protocol Version 4, Src: 10.224.61.22, Dst: 10.224.61.13
>>
>> Transmission Control Protocol, Src Port: rlm-admin (5054), Dst Port:
>> 34312 (34312), Seq: 349, Ack: 2199, Len: 637
>>
>> Session Initiation Protocol (503)
>>
>>
>>
>>
>>
>> cheers,
>>
>> sunil
>>
>>
>>
>> On Tue, Mar 20, 2018 at 5:21 PM, Sunil Kumar <[email protected]>
>> wrote:
>>
>> Hi all,
>>
>> I have taken a tcpdump as well, but there is no SIP message in it. Please
>> throw some light on this problem; I have been trying since last week and
>> am not able to catch the problem. Thanks in advance.
>>
>>
>>
>> cheers,
>>
>> Sunil
>>
>>
>>
>> On Tue, Mar 20, 2018 at 10:30 AM, Sunil Kumar <[email protected]>
>> wrote:
>>
>> Hi CW team,
>>
>> Anyone out there, please help me. I am facing a problem in stress testing.
>> I have installed CW manually. Whenever I run with 1, or with fewer than 20
>> subscribers, it gives errors like:
>>
>>
>>
>> *[]ubuntu@stress:~$ sudo /usr/share/clearwater/bin/run_stress ims.com 1 2*
>>
>> [sudo] password for ubuntu:
>>
>> Starting initial registration, will take 0 seconds
>>
>> Initial registration succeeded
>>
>> Starting test
>>
>> Test complete
>>
>> Traceback (most recent call last):
>>
>>   File "/usr/share/clearwater/bin/run_stress", line 340, in <module>
>>
>>     with open(CALLER_STATS) as f:
>>
>> IOError: [Errno 2] No such file or directory:
>> '/var/log/clearwater-sip-stress/18065_caller_stats.log'
>>
>>
>>
>>
>>
>> *[]ubuntu@stress:~$ sudo /usr/share/clearwater/bin/run_stress ims.com 10 5*
>>
>> Starting initial registration, will take 0 seconds
>>
>> Initial registration succeeded
>>
>> Starting test
>>
>> Test complete
>>
>> Traceback (most recent call last):
>>
>>   File "/usr/share/clearwater/bin/run_stress", line 346, in <module>
>>
>>     call_success_rate = 100 * float(row['SuccessfulCall(C)']) /
>> float(row['TotalCallCreated'])
>>
>> ZeroDivisionError: float division by zero
>>
>>
>>
>>
>>
>> *[]ubuntu@stress:~$ sudo /usr/share/clearwater/bin/run_stress
>> iind.intel.com 50 5*
>>
>> Starting initial registration, will take 0 seconds
>>
>> Initial registration succeeded
>>
>> Starting test
>>
>> Test complete
>>
>>
>>
>> Elapsed time: 00:03:41
>>
>> Start: 2018-03-20 17:46:43.268136
>>
>> End: 2018-03-20 17:51:31.363406
>>
>>
>>
>> Total calls: 2
>>
>> Successful calls: 0 (0.0%)
>>
>> Failed calls: 2 (100.0%)
>>
>> Unfinished calls: 0
>>
>>
>>
>> Retransmissions: 0
>>
>>
>>
>> Average time from INVITE to 180 Ringing: 0.0ms
>>
>> # of calls with 0-2ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 2-10ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 10-20ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 20-50ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 50-100ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 100-200ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 200-500ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 500-1000ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 1000-2000ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> # of calls with 2000+ms from INVITE to 180 Ringing: 0 (0.0%)
>>
>> Failed: call success rate 0.0% is lower than target 100.0%!
>>
>>
>>
>> Total re-REGISTERs: 8
>>
>> Successful re-REGISTERs: 8 (100.0%)
>>
>> Failed re-REGISTERS: 0 (0.0%)
>>
>>
>>
>> REGISTER retransmissions: 0
>>
>>
>>
>> Average time from REGISTER to 200 OK: 86.0ms
>>
>>
>>
>> Log files at /var/log/clearwater-sip-stress/18566_*
>>
>>
>>
>>
>>
>>
>>
>> *[]ubuntu@stress:~$ cat
>> /var/log/clearwater-sip-stress/18566_caller_errors.log*
>>
>> sipp: The following events occurred:
>>
>> 2018-03-20      17:48:34.125945 1521548314.125945: Aborting call on
>> unexpected message for Call-Id '[email protected]': while expecting
>> '183' (index 2), received '*SIP/2.0 503 Service Unavailable*
>>
>> Via: SIP/2.0/TCP 127.0.1.1:42276;received=10.22
>> 4.61.13;branch=z9hG4bK-18576-1-0
>>
>> Record-Route: <sip:scscf.sprout.ims.com;tran
>> sport=TCP;lr;billing-role=charge-term>
>>
>> Record-Route: <sip:scscf.sprout. ims.com ;transport=TCP;lr;billing-role
>> =charge-orig>
>>
>> Call-ID: [email protected]
>>
>> From: <sip:2010000042@ ims.com >;tag=18576SIPpTag001
>>
>> To: <sip:2010000015@ ims.co>;tag=z9hG4bKPj1Lm9whhQM
>> slKrcZxnN6qCH0tb9Lj5Neu
>>
>> CSeq: 1 INVITE
>>
>> P-Charging-Vector: icid-value="18576SIPpTag001";orig-ioi= ims.com
>> ;term-ioi= ims.com
>>
>> P-Charging-Function-Addresses: ccf=0.0.0.0
>>
>> Content-Length:  0
>>
>>
>>
>>
>>
>> *[sprout]ubuntu@sprout:/var/log/sprout$ cat sprout_current.txt*
>>
>> --start msg--
>>
>>
>>
>> SIP/2.0 200 OK
>>
>> Via: SIP/2.0/TCP 10.224.61.22;rport=49294;recei
>> ved=10.224.61.22;branch=z9hG4bK-670172
>>
>> Call-ID: poll-sip-670172
>>
>> From: "poll-sip" <sip:[email protected]>;tag=670172
>>
>> To: <sip:[email protected]>;tag=z9hG4bK-670172
>>
>> CSeq: 670172 OPTIONS
>>
>> Content-Length:  0
>>
>>
>>
>>
>>
>> --end msg--
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug
>> common_sip_processing.cpp:275: Skipping SAS logging for OPTIONS response
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug pjsip: tdta0x7f35841b
>> Destroying txdata Response msg 200/OPTIONS/cseq=670172 (tdta0x7f35841bfe80)
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug
>> thread_dispatcher.cpp:270: Worker thread completed processing message
>> 0x7f34ec34a3e8
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug
>> thread_dispatcher.cpp:284: Request latency = 254us
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 254 for 0x1d8f778
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 254 for 0x1d8f820
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Info load_monitor.cpp:217:
>> Rate adjustment calculation inputs: err -0.981500, smoothed latency 185,
>> target latency 10000
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Info load_monitor.cpp:302:
>> Maximum incoming request rate/second unchanged at 2000.000000 (current
>> request rate is 0.200000 requests/sec, minimum threshold for a change is
>> 1000.000000 requests/sec).
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug
>> snmp_continuous_accumulator_by_scope_table.cpp:86: Accumulating sample
>> 2000ui into continuous accumulator statistic
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug
>> snmp_continuous_accumulator_by_scope_table.cpp:86: Accumulating sample
>> 2000ui into continuous accumulator statistic
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug utils.cpp:878: Removed
>> IOHook 0x7f35577d5e30 to stack. There are now 0 hooks
>>
>> 20-03-2018 12:27:23.234 UTC [7f35577d6700] Debug
>> thread_dispatcher.cpp:158: Attempting to process queue element
>>
>> 20-03-2018 12:27:25.235 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> TCP connection closed
>>
>> 20-03-2018 12:27:25.235 UTC [7f34f170a700] Debug
>> connection_tracker.cpp:67: Connection 0x7f34ec027358 has been destroyed
>>
>> 20-03-2018 12:27:25.235 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> TCP transport destroyed with reason 70016: End of file (PJ_EEOF)
>>
>> 20-03-2018 12:27:28.790 UTC [7f3573109700] Warning (Net-SNMP): Warning:
>> Failed to connect to the agentx master agent ([NIL]):
>>
>> 20-03-2018 12:27:31.277 UTC [7f34f170a700] Verbose pjsip:    tcplis:5054
>> TCP listener 10.224.61.22:5054: got incoming TCP connection from
>> 10.224.61.22:42848, sock=573
>>
>> 20-03-2018 12:27:31.277 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> tcp->base.local_name: 10.224.61.22
>>
>> 20-03-2018 12:27:31.277 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> TCP server transport created
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug pjsip: sip_endpoint.c
>> Processing incoming message: Request msg OPTIONS/cseq=670180
>> (rdata0x7f34ec027690)
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Verbose
>> common_sip_processing.cpp:87: RX 351 bytes Request msg OPTIONS/cseq=670180
>> (rdata0x7f34ec027690) from TCP 10.224.61.22:42848:
>>
>> --start msg--
>>
>>
>>
>> OPTIONS sip:[email protected]:5054 SIP/2.0
>>
>> Via: SIP/2.0/TCP 10.224.61.22;rport;branch=z9hG4bK-670180
>>
>> Max-Forwards: 2
>>
>> To: <sip:[email protected]:5054>
>>
>> From: poll-sip <sip:[email protected]>;tag=670180
>>
>> Call-ID: poll-sip-670180
>>
>> CSeq: 670180 OPTIONS
>>
>> Contact: <sip:10.224.61.22>
>>
>> Accept: application/sdp
>>
>> Content-Length: 0
>>
>> User-Agent: poll-sip
>>
>>
>>
>>
>>
>> --end msg--
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug uri_classifier.cpp:139:
>> home domain: false, local_to_node: true, is_gruu: false,
>> enforce_user_phone: false, prefer_sip: true, treat_number_as_phone: false
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug uri_classifier.cpp:172:
>> Classified URI as 3
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug
>> common_sip_processing.cpp:180: Skipping SAS logging for OPTIONS request
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug
>> thread_dispatcher.cpp:554: Recieved message 0x7f34ec027690 on worker thread
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug
>> thread_dispatcher.cpp:571: Admitted request 0x7f34ec027690 on worker thread
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug
>> thread_dispatcher.cpp:606: Incoming message 0x7f34ec027690 cloned to
>> 0x7f34ec34a3e8
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug
>> thread_dispatcher.cpp:625: Queuing cloned received message 0x7f34ec34a3e8
>> for worker threads with priority 15
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 0 for 0x1d93708
>>
>> 20-03-2018 12:27:31.314 UTC [7f34f170a700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 0 for 0x1d937b0
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug utils.cpp:872: Added
>> IOHook 0x7f353ffa6e30 to stack. There are now 1 hooks
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug
>> thread_dispatcher.cpp:178: Worker thread dequeue message 0x7f34ec34a3e8
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug
>> thread_dispatcher.cpp:183: Request latency so far = 57us
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug pjsip: sip_endpoint.c
>> Distributing rdata to modules: Request msg OPTIONS/cseq=670180
>> (rdata0x7f34ec34a3e8)
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug uri_classifier.cpp:139:
>> home domain: false, local_to_node: true, is_gruu: false,
>> enforce_user_phone: false, prefer_sip: true, treat_number_as_phone: false
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug uri_classifier.cpp:172:
>> Classified URI as 3
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug pjsip:       endpoint
>> Response msg 200/OPTIONS/cseq=670180 (tdta0x7f34d8091300) created
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Verbose
>> common_sip_processing.cpp:103: TX 282 bytes Response msg
>> 200/OPTIONS/cseq=670180 (tdta0x7f34d8091300) to TCP 10.224.61.22:42848:
>>
>> --start msg--
>>
>>
>>
>> SIP/2.0 200 OK
>>
>> Via: SIP/2.0/TCP 10.224.61.22;rport=42848;recei
>> ved=10.224.61.22;branch=z9hG4bK-670180
>>
>> Call-ID: poll-sip-670180
>>
>> From: "poll-sip" <sip:[email protected]>;tag=670180
>>
>> To: <sip:[email protected]>;tag=z9hG4bK-670180
>>
>> CSeq: 670180 OPTIONS
>>
>> Content-Length:  0
>>
>>
>>
>>
>>
>> --end msg--
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug
>> common_sip_processing.cpp:275: Skipping SAS logging for OPTIONS response
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug pjsip: tdta0x7f34d809
>> Destroying txdata Response msg 200/OPTIONS/cseq=670180 (tdta0x7f34d8091300)
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug
>> thread_dispatcher.cpp:270: Worker thread completed processing message
>> 0x7f34ec34a3e8
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug
>> thread_dispatcher.cpp:284: Request latency = 129us
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 129 for 0x1d8f778
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 129 for 0x1d8f820
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug load_monitor.cpp:341:
>> Not recalculating rate as we haven't processed 20 requests yet (only 1).
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug utils.cpp:878: Removed
>> IOHook 0x7f353ffa6e30 to stack. There are now 0 hooks
>>
>> 20-03-2018 12:27:31.314 UTC [7f353ffa7700] Debug
>> thread_dispatcher.cpp:158: Attempting to process queue element
>>
>> 20-03-2018 12:27:31.333 UTC [7f34f0f09700] Verbose httpstack.cpp:327:
>> Process request for URL /ping, args (null)
>>
>> 20-03-2018 12:27:31.333 UTC [7f34f0f09700] Verbose httpstack.cpp:68:
>> Sending response 200 to request for URL /ping, args (null)
>>
>> 20-03-2018 12:27:33.315 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> TCP connection closed
>>
>> 20-03-2018 12:27:33.316 UTC [7f34f170a700] Debug
>> connection_tracker.cpp:67: Connection 0x7f34ec027358 has been destroyed
>>
>> 20-03-2018 12:27:33.316 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> TCP transport destroyed with reason 70016: End of file (PJ_EEOF)
>>
>> 20-03-2018 12:27:33.328 UTC [7f34f170a700] Verbose pjsip:    tcplis:5053
>> TCP listener 10.224.61.22:5053: got incoming TCP connection from
>> 10.224.61.22:49356, sock=573
>>
>> 20-03-2018 12:27:33.328 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> tcp->base.local_name: 10.224.61.22
>>
>> 20-03-2018 12:27:33.328 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> TCP server transport created
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug pjsip: sip_endpoint.c
>> Processing incoming message: Request msg OPTIONS/cseq=670182
>> (rdata0x7f34ec027690)
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Verbose
>> common_sip_processing.cpp:87: RX 351 bytes Request msg OPTIONS/cseq=670182
>> (rdata0x7f34ec027690) from TCP 10.224.61.22:49356:
>>
>> --start msg--
>>
>>
>>
>> OPTIONS sip:[email protected]:5053 SIP/2.0
>>
>> Via: SIP/2.0/TCP 10.224.61.22;rport;branch=z9hG4bK-670182
>>
>> Max-Forwards: 2
>>
>> To: <sip:[email protected]:5053>
>>
>> From: poll-sip <sip:[email protected]>;tag=670182
>>
>> Call-ID: poll-sip-670182
>>
>> CSeq: 670182 OPTIONS
>>
>> Contact: <sip:10.224.61.22>
>>
>> Accept: application/sdp
>>
>> Content-Length: 0
>>
>> User-Agent: poll-sip
>>
>>
>>
>>
>>
>> --end msg--
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug uri_classifier.cpp:139:
>> home domain: false, local_to_node: true, is_gruu: false,
>> enforce_user_phone: false, prefer_sip: true, treat_number_as_phone: false
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug uri_classifier.cpp:172:
>> Classified URI as 3
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug
>> common_sip_processing.cpp:180: Skipping SAS logging for OPTIONS request
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug
>> thread_dispatcher.cpp:554: Recieved message 0x7f34ec027690 on worker thread
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug
>> thread_dispatcher.cpp:571: Admitted request 0x7f34ec027690 on worker thread
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug
>> thread_dispatcher.cpp:606: Incoming message 0x7f34ec027690 cloned to
>> 0x7f34ec34a3e8
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug
>> thread_dispatcher.cpp:625: Queuing cloned received message 0x7f34ec34a3e8
>> for worker threads with priority 15
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 0 for 0x1d93708
>>
>> 20-03-2018 12:27:33.329 UTC [7f34f170a700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 0 for 0x1d937b0
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug utils.cpp:872: Added
>> IOHook 0x7f354d7c1e30 to stack. There are now 1 hooks
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug
>> thread_dispatcher.cpp:178: Worker thread dequeue message 0x7f34ec34a3e8
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug
>> thread_dispatcher.cpp:183: Request latency so far = 102us
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug pjsip: sip_endpoint.c
>> Distributing rdata to modules: Request msg OPTIONS/cseq=670182
>> (rdata0x7f34ec34a3e8)
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug uri_classifier.cpp:139:
>> home domain: false, local_to_node: true, is_gruu: false,
>> enforce_user_phone: false, prefer_sip: true, treat_number_as_phone: false
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug uri_classifier.cpp:172:
>> Classified URI as 3
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug pjsip:       endpoint
>> Response msg 200/OPTIONS/cseq=670182 (tdta0x7f34ec004350) created
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Verbose
>> common_sip_processing.cpp:103: TX 282 bytes Response msg
>> 200/OPTIONS/cseq=670182 (tdta0x7f34ec004350) to TCP 10.224.61.22:49356:
>>
>> --start msg--
>>
>>
>>
>> SIP/2.0 200 OK
>>
>> Via: SIP/2.0/TCP 10.224.61.22;rport=49356;received=10.224.61.22;branch=z9hG4bK-670182
>>
>> Call-ID: poll-sip-670182
>>
>> From: "poll-sip" <sip:[email protected]>;tag=670182
>>
>> To: <sip:[email protected]>;tag=z9hG4bK-670182
>>
>> CSeq: 670182 OPTIONS
>>
>> Content-Length:  0
>>
>>
>>
>>
>>
>> --end msg--
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug
>> common_sip_processing.cpp:275: Skipping SAS logging for OPTIONS response
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug pjsip: tdta0x7f34ec00
>> Destroying txdata Response msg 200/OPTIONS/cseq=670182 (tdta0x7f34ec004350)
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug
>> thread_dispatcher.cpp:270: Worker thread completed processing message
>> 0x7f34ec34a3e8
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug
>> thread_dispatcher.cpp:284: Request latency = 232us
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 232 for 0x1d8f778
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug
>> event_statistic_accumulator.cpp:32: Accumulate 232 for 0x1d8f820
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug load_monitor.cpp:341:
>> Not recalculating rate as we haven't processed 20 requests yet (only 2).
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug utils.cpp:878: Removed
>> IOHook 0x7f354d7c1e30 to stack. There are now 0 hooks
>>
>> 20-03-2018 12:27:33.329 UTC [7f354d7c2700] Debug
>> thread_dispatcher.cpp:158: Attempting to process queue element
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:244:
>> Reraising all alarms with a known state
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1001.1 alarm
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1005.1 alarm
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1011.1 alarm
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1012.1 alarm
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1013.1 alarm
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1004.1 alarm
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1002.1 alarm
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1009.1 alarm
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Debug alarm.cpp:303:
>> AlarmReqAgent: queue overflowed
>>
>> 20-03-2018 12:27:34.994 UTC [7f358b7fe700] Status alarm.cpp:37: sprout
>> issued 1010.1 alarm
>>
>> 20-03-2018 12:27:35.330 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> TCP connection closed
>>
>> 20-03-2018 12:27:35.330 UTC [7f34f170a700] Debug
>> connection_tracker.cpp:67: Connection 0x7f34ec027358 has been destroyed
>>
>> 20-03-2018 12:27:35.330 UTC [7f34f170a700] Verbose pjsip: tcps0x7f34ec02
>> TCP transport destroyed with reason 70016: End of file (PJ_EEOF)
>>
>>
>>
>>
>>
>>
>>
>> All calls are failing and I don't know what is going on. I am a newbie to
>> Clearwater, so please suggest a solution; it would be a great help.
>>
>>
>>
>>
>>
>> Thanks,
>>
>> Sunil
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> Clearwater mailing list
>> [email protected]
>> http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org
>>
>>
>>
>
>