Hi Linda,

Thank you for all the information. It's interesting that it works with XLite but not with SIPp. We would suggest capturing the SIP INVITE messages for both methods (using a packet capture tool such as tcpdump) and seeing whether you can spot any differences between them. Any differences would give an idea of what needs changing in the SIPp scenario. Let us know how it goes.

Thank you,
Ben
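The capture-and-compare step above can be sketched as follows. This is just a suggestion for how to do the diff, not part of any Clearwater tooling: the `normalize_sip` helper name and the exact set of masked headers are assumptions. The idea is to mask the fields that legitimately differ on every call so that only meaningful differences between the XLite and SIPp INVITEs remain.

```shell
# Capture SIP traffic during an XLite call, then during a SIPp run, e.g.:
#   tcpdump -i any -s 0 -w xlite.pcap port 5060
#   tcpdump -i any -s 0 -w sipp.pcap  port 5060
# then extract each INVITE as text and pipe it through this filter
# before diffing.

# Mask fields that differ on every call anyway: Via branch, From/To
# tags, Call-ID, and the CSeq number.
normalize_sip() {
  sed -e 's/branch=[^;[:space:]]*/branch=X/g' \
      -e 's/tag=[^;[:space:]]*/tag=X/g' \
      -e 's/^Call-ID:.*/Call-ID: X/' \
      -e 's/^CSeq: [0-9]*/CSeq: N/'
}

# Example usage, with each INVITE saved to a text file:
#   diff <(normalize_sip < invite_xlite.txt) <(normalize_sip < invite_sipp.txt)
```

Anything left in the diff after normalisation (Request-URI, Route headers, Contact, authentication headers, SDP) is a real difference between the two clients and a candidate for what needs changing in the SIPp scenario file.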
-----Original Message-----
From: Clearwater [mailto:clearwater-boun...@lists.projectclearwater.org] On Behalf Of clearwater-requ...@lists.projectclearwater.org
Sent: 07 March 2018 03:36
To: clearwater@lists.projectclearwater.org
Subject: Clearwater Digest, Vol 59, Issue 15

Today's Topics:

   1. Re: [Clearwater] Could not get subscriber data from HSS (wang wulin)

----------------------------------------------------------------------

Message: 1
Date: Wed, 7 Mar 2018 03:35:15 +0000
From: wang wulin <wangwu...@hotmail.com>
To: "clearwater@lists.projectclearwater.org" <clearwater@lists.projectclearwater.org>
Subject: Re: [Project Clearwater] [Clearwater] Could not get subscriber data from HSS
Message-ID: <hk0pr03mb27555ae520e6302a7fbc93fbb9...@hk0pr03mb2755.apcprd03.prod.outlook.com>
Content-Type: text/plain; charset="gb2312"

Hi Ben,

Sorry, I did not receive either Adm's earlier answer or yours; there seems to be some issue with my hotmail.
1) I deployed Clearwater via a test case named "cloudify_ims" from the opnfv/functest project, which runs three steps:

* deploy a VNF orchestrator (Cloudify)
* deploy a Clearwater vIMS (IP Multimedia Subsystem) VNF from this orchestrator, based on the TOSCA blueprint defined in [1]
* run a suite of signaling tests on top of this VNF

[1]: https://github.com/Orange-OpenSource/opnfv-cloudify-clearwater/archive/master.zip

8 instances are created, and I also created a new instance named "stress_node" according to this guidance: http://clearwater.readthedocs.io/en/stable/Clearwater_stress_testing.html

Please see https://hastebin.com/edixewanip.rb for detailed instance info, or below:

bash-4.3# openstack server list
| ID                                   | Name                                       | Status | Networks                                         | Image                | Flavor    |
| 50b934e1-131f-4599-a7aa-85d744fda72c | stress_node                                | ACTIVE | cloudify_ims_network=10.67.79.14                 | ubuntu_14.04         | m1.small  |
| bc7da755-b58b-48a5-8258-dda0bdb2b92d | server_clearwater-opnfv_bono_host_zk2kcg   | ACTIVE | cloudify_ims_network=10.67.79.17, 192.168.33.211 | ubuntu_14.04         | m1.small  |
| 3fe2b0bf-4b12-42a1-8be7-a9222c46428f | server_clearwater-opnfv_sprout_host_s6a4tr | ACTIVE | cloudify_ims_network=10.67.79.20                 | ubuntu_14.04         | m1.small  |
| c032dacd-eb8b-48a8-b314-9520f7ff64da | server_clearwater-opnfv_dime_host_kag04s   | ACTIVE | cloudify_ims_network=10.67.79.18                 | ubuntu_14.04         | m1.small  |
| 7d09aa0f-bbeb-4b62-be56-95bfd097def0 | server_clearwater-opnfv_vellum_host_ldyhbh | ACTIVE | cloudify_ims_network=10.67.79.6                  | ubuntu_14.04         | m1.small  |
| 3005c542-0ee5-4430-9f56-f221ccb1f104 | server_clearwater-opnfv_ellis_host_s7cahy  | ACTIVE | cloudify_ims_network=10.67.79.15, 192.168.33.201 | ubuntu_14.04         | m1.small  |
| 04b894ff-1c07-428a-9a8b-72db5335e843 | server_clearwater-opnfv_homer_host_m88cq7  | ACTIVE | cloudify_ims_network=10.67.79.9                  | ubuntu_14.04         | m1.small  |
| 4614747e-2cb7-4450-837e-aa9c51af8e53 | server_clearwater-opnfv_bind_host_643c4w   | ACTIVE | cloudify_ims_network=10.67.79.10, 192.168.33.208 | ubuntu_14.04         | m1.small  |
| 8f26ed2c-8095-43b1-8c40-13ddc080fbc9 | server_clearwater-opnfv_proxy_host_nocslx  | ACTIVE | cloudify_ims_network=10.67.79.5                  | ubuntu_14.04         | m1.small  |
| 7ae3612f-67a5-42c6-ab1b-94dbd067272a | cloudify_manager                           | ACTIVE | cloudify_ims_network=10.67.79.11, 192.168.33.207 | cloudify_manager_4.0 | m1.medium |

2) root@dime-5y29tl:/var/log/homestead# cat homestead_current.txt
........
07-03-2018 03:30:03.145 UTC Status load_monitor.cpp:285: Maximum incoming request rate/second unchanged - only handled 21 requests in last 5644ms, minimum threshold for a change is 4495.530273
07-03-2018 03:30:05.674 UTC Status alarm.cpp:62: homestead issued 1501.1 alarm
07-03-2018 03:30:12.148 UTC Status load_monitor.cpp:285: Maximum incoming request rate/second unchanged - only handled 21 requests in last 9005ms, minimum threshold for a change is 7172.617188
07-03-2018 03:30:28.567 UTC Status load_monitor.cpp:285: Maximum incoming request rate/second unchanged - only handled 21 requests in last 16420ms, minimum threshold for a change is 13078.776367
07-03-2018 03:30:33.027 UTC Status load_monitor.cpp:285: Maximum incoming request rate/second unchanged - only handled 21 requests in last 4460ms, minimum threshold for a change is 3552.456787
07-03-2018 03:30:35.674 UTC Status alarm.cpp:62: homestead issued 1501.1 alarm
07-03-2018 03:30:45.640 UTC Status load_monitor.cpp:285: Maximum incoming request rate/second
unchanged - only handled 22 requests in last 12613ms, minimum threshold for a change is 10046.443359

3) root@dime-5y29tl:/var/log/homestead# cat /etc/clearwater/shared_config
# Deployment definitions
home_domain=clearwater.opnfv
sprout_hostname=sprout.clearwater.local
chronos_hostname=10.67.79.10:7253
hs_hostname=hs.clearwater.local:8888
hs_provisioning_hostname=hs-prov.clearwater.local:8889
sprout_impi_store=vellum.clearwater.local
sprout_registration_store=vellum.clearwater.local
cassandra_hostname=vellum.clearwater.local
chronos_hostname=vellum.clearwater.local
ralf_session_store=vellum.clearwater.local
ralf_hostname=ralf.clearwater.local:10888
xdms_hostname=homer.clearwater.local:7888
signaling_dns_server=10.67.79.10

# Email server configuration
smtp_smarthost=localhost
smtp_username=username
smtp_password=password
email_recovery_sender=clearwa...@example.org

# Keys
signup_key=secret
turn_workaround=secret
ellis_api_key=secret
ellis_cookie_key=secret

4) root@dime-5y29tl:/var/log/homestead# cat /etc/clearwater/local_config
local_ip=10.67.79.18
public_ip=
public_hostname=dime-5y29tl.clearwater.local
etcd_cluster=10.67.79.10
etcd_cluster_key=bind

5) root@dime-5y29tl:/var/log/homestead# monit summary
Monit 5.18.1 uptime: 35d 17h 57m

Service Name                         Status     Type
node-dime-5y29tl.clearwater....      Running    System
snmpd_process                        Running    Process
ralf_process                         Running    Process
ntp_process                          Running    Process
nginx_process                        Running    Process
homestead_process                    Running    Process
homestead-prov_process               Running    Process
clearwater_queue_manager_pro...      Running    Process
etcd_process                         Running    Process
clearwater_diags_monitor_pro...      Running    Process
clearwater_config_manager_pr...      Running    Process
clearwater_cluster_manager_p...
Running    Process
ralf_uptime                          Status ok  Program
poll_ralf                            Status ok  Program
nginx_ping                           Status ok  Program
nginx_uptime                         Status ok  Program
monit_uptime                         Status ok  Program
homestead_uptime                     Status ok  Program
poll_homestead                       Status ok  Program
check_cx_health                      Status ok  Program
poll_homestead-prov                  Status ok  Program
clearwater_queue_manager_uptime      Status ok  Program
etcd_uptime                          Status ok  Program
poll_etcd_cluster                    Status ok  Program
poll_etcd                            Status ok  Program

6) It works well when I make a call via XLite, but it fails when I run via SIPp; see here: https://hastebin.com/ujilusimik.hs

root@stress-linda:/usr/share/clearwater/sip-stress# nice -n-20 /usr/share/clearwater/bin/sipp -i 10.67.79.16 -sf ./sip-stress.xml 10.67.79.17 -t tn -s clearwater.opnfv -inf ./users.csv.2 -users 50 -m 50 -default_behaviors all,-bye -max_socket 65000 -max_reconnect -1 -reconnect_sleep 0 -reconnect_close 0 -send_timeout 4000 -recv_timeout 12000

------------------------------ Scenario Screen -------- [1-9]: Change Screen --
  Users (length)   Port   Total-time  Total-calls  Remote-host
  50 (0 ms)        5060   40633.25 s           50  10.67.79.17:5060(TCP)

  Call limit reached (-m 50), 0.000 s period  0 ms scheduler resolution
  0 calls (limit 50)                     Peak was 50 calls, after 0 s
  0 Running, 3 Paused, 0 Woken up
  0 dead call msg (discarded)            14 out-of-call msg (discarded)
  1 open sockets

                                Messages  Retrans  Timeout  Unexpected-Msg
      Pause [0ms/10:00]               50        0
      REGISTER ---------->            50        0
           401 <----------            50        0        0        0
      REGISTER ---------->            50        0
           200 <----------            50        0        0        0
      REGISTER ---------->            50        0
           401 <----------            50        0        0        0
      REGISTER ---------->            50        0
           200 <----------            50        0        0        0
      Pause [     10.0s]              50        0
      REGISTER ----------> B-RTD1   1072        0
           200 <---------- E-RTD1   1072        0        0        0
      REGISTER ----------> B-RTD1   1072        0
           200 <---------- E-RTD1   1072        0        0        0
      Pause [$reg_pause]             895        0
      Pause [$pre_call_delay]        177        0
        INVITE ----------> B-RTD2    177        0
           100 <----------           176        0        0        0
        INVITE <----------             1        0        0        0
           100 <----------             1        0        0        0
        INVITE <----------           126        0       23       27
           100 ---------->           127        0
           180 ---------->           127        0
           180 <----------           127        0        0        0
      Pause [$call_answer]           127        0
           200 ---------->           127        0
           200 <----------           127        0        0        0
           ACK ---------->           127        0
           ACK <----------           127        0        0        0
        UPDATE ---------->           127        0
        UPDATE <----------           127        0        0        0
           200 ---------->           127        0
           200 <---------- E-RTD2    127        0        0        0
      Pause [$call_length]           127        0
           BYE ----------> B-RTD3    127        0
           BYE <----------           127        0        0        0
           200 ---------->           127        0
           200 <---------- E-RTD3    127        0        0        0
      Pause [$post_call_delay]       127        0

------------------------------ Test Terminated --------------------------------

----------------------------- Statistics Screen ------- [1-9]: Change Screen --
  Start Time       | 2018-03-06 06:34:20:871  1520318060.871621
  Last Reset Time  | 2018-03-06 17:51:34:128  1520358694.128324
  Current Time     | 2018-03-06 17:51:34:131  1520358694.131933
-------------------------+---------------------------+--------------------------
  Counter Name           | Periodic value            | Cumulative value
-------------------------+---------------------------+--------------------------
  Elapsed Time           | 00:00:00:003              | 11:17:13:260
  Call Rate              | 0.000 cps                 | 0.001 cps
-------------------------+---------------------------+--------------------------
  Incoming call created  | 0                         | 0
  OutGoing call created  | 0                         | 50
  Total Call created     |                           | 50
  Current Call           | 0                         |
-------------------------+---------------------------+--------------------------
  Successful call        | 0                         | 0
  Failed call            | 0                         | 50
-------------------------+---------------------------+--------------------------
  Response Time register | 00:00:00:000              | 00:00:00:004
  Response Time call-set | 00:00:00:000              | 00:00:06:032
  Response Time call-tea | 00:00:00:000              | 00:00:00:003
  Call Length            | 00:00:00:000              | 01:49:29:242
------------------------------ Test Terminated --------------------------------
2018-03-06 17:51:34:125 1520358694.125924: Aborting call on unexpected message for Call-Id '35-18929@10.67.79.16': while expecting 'INVITE' (index 23), received:

SIP/2.0 480 Temporarily Unavailable
Via: SIP/2.0/TCP 10.67.79.16:23879;rport=23879;received=10.67.79.16;branch=z9hG4bK-2010000030-35-21.000000-1
Record-Route: <sip:scscf.sprout.clearwater.local:5054;transport=TCP;lr;billing-role=charge-term>
Record-Route: <sip:scscf.sprout.clearwater.local:5054;transport=TCP;lr;billing-role=charge-orig>
Record-Route: <sip:10.67.79.17:5058;transport=TCP;lr>
Record-Route: <sip:zad374VHyb@bono-i2sn7d.clearwater.local:5060;transport=TCP;lr>
Call-ID: 2010000030-21.000000///35-18929@10.67.79.16
From: <sip:2010000030@clearwater.opnfv>;tag=18929SIPpTag00351234
To: <sip:2010000031@clearwater.opnfv>;tag=z9hG4bKPjtsB4PSC2qg1mvZI4DWvb3yPmB6pp-3j-
CSeq: 333 INVITE
Content-Length: 0

Thanks,
Linda

________________________________
From: wang wulin <wangwu...@hotmail.com>
Sent: 2 March 2018 16:39
To: clearwater@lists.projectclearwater.org
Subject: Re: [Clearwater] Could not get subscriber data from HSS

Hi Clearwater Team,

Here is the result after running "/usr/share/clearwater/bin/run_stress clearwater.opnfv 16 10":
2018-03-02 08:25:12.757782 1519979112.757782: Aborting call on unexpected message for Call-Id '1-21165@10.67.79.16': while expecting '183' (index 2), received:

SIP/2.0 480 Temporarily Unavailable
Via: SIP/2.0/TCP 10.67.79.16:54572;received=10.67.79.16;branch=z9hG4bK-21165-1-0
Record-Route: <sip:scscf.sprout.clearwater.local:5054;transport=TCP;lr;billing-role=charge-term>
Record-Route: <sip:scscf.sprout.clearwater.local:5054;transport=TCP;lr;billing-role=charge-orig>
Record-Route: <sip:10.67.79.17:5058;transport=TCP;lr>
Record-Route: <sip:/bDGU121V2@bono-i2sn7d.clearwater.local:5060;transport=TCP;lr>
Call-ID: 1-21165@10.67.79.16
From: <sip:2770000012@clearwater.opnfv>;tag=21165SIPpTag001
To: <sip:2770000015@clearwater.opnfv>;tag=z9hG4bKPjtk6L-d0QjzkZB8rbq-CMdJGW9zpHhMbt
CSeq: 1 INVITE
Content-Length: 0

Total calls: 1
Successful calls: 0 (0.0%)
Failed calls: 1 (100.0%)
Unfinished calls: 0
Retransmissions: 0
Average time from INVITE to 180 Ringing: 0.0ms
# of calls with 0-2ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 2-10ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 10-20ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 20-50ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 50-100ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 100-200ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 200-500ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 500-1000ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 1000-2000ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 2000+ms from INVITE to 180 Ringing: 0 (0.0%)
Failed: call success rate 0.0% is lower than target 100.0%!
Total re-REGISTERs: 5
Successful re-REGISTERs: 5 (100.0%)
Failed re-REGISTERs: 0 (0.0%)
REGISTER retransmissions: 0
Average time from REGISTER to 200 OK: 12.0ms

Thanks,
Linda

________________________________
From: wang wulin <wangwu...@hotmail.com>
Sent: 2 March 2018 15:38
To: clearwater@lists.projectclearwater.org
Subject: [Clearwater] Could not get subscriber data from HSS

Hi Clearwater Team,

I deployed a stress node according to this guidance: http://clearwater.readthedocs.io/en/stable/Clearwater_stress_testing.html, and tried to run stress via "/usr/share/clearwater/bin/run_stress clearwater.opnfv 10 10". "Register" works now, but the Call step still fails:

[cid:29209895-4920-4757-a1c0-5756bb653935]

I got this error from /var/log/sprout: "Error hssconnection.cpp:704: Could not get subscriber data from HSS"

[cid:3b29aefa-363d-4be8-98af-0fff4b0d0c52]

I only executed the 4 commands below on the Vellum node:

1) . /etc/clearwater/config; for DN in {2770000000..2770000099} ; do echo sip:$DN@$home_domain,$d...@clearwater.opnfv,clearwater.opnfv,7kkzTyGW ; done > users.csv
2) cd /usr/share/clearwater/crest-prov/src/metaswitch/crest/tools/ && python bulk_create.py users.csv
3) ./users.create_xdm.sh
4) ./users.create_homestead.sh

Did I miss some other steps? Do you know if an "HSS" node is also required?
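For reference, step 1 above can be written as a small reusable script. This is only a sketch of the same loop: the `make_users_csv` name is made up here, and it assumes the truncated `$d...` field in the archived mail is the private ID `$DN@<domain>` (an inference, since that field is cut off in the original). Whatever range is used must cover every directory number the SIPp scenario actually dials.

```shell
#!/bin/bash
# Sketch of the bulk-provisioning CSV from step 1.
# Field layout per the thread: public ID, private ID, realm, password.
# Assumptions: "make_users_csv" is a made-up helper name, and the
# truncated "$d..." in the original loop is taken to be $DN@<domain>.
make_users_csv() {
  local start=$1 end=$2 domain=$3 password=$4
  local dn
  for dn in $(seq "$start" "$end"); do
    echo "sip:${dn}@${domain},${dn}@${domain},${domain},${password}"
  done
}

# Same range and password as in the thread:
make_users_csv 2770000000 2770000099 clearwater.opnfv 7kkzTyGW > users.csv
```

The resulting users.csv is then fed to bulk_create.py and the users.create_*.sh scripts exactly as in steps 2-4.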
2) I also got these errors on the Dime node:

root@dime-5y29tl:/var/log/ralf# vim /var/log/syslog
Mar 2 06:37:01 dime-5y29tl issue-alarm: zmq_msg_recv: Invalid argument
Mar 2 06:37:02 dime-5y29tl config-manager[10778]: dropped request: 'issue-alarm config-manager 8500.3'
Mar 2 06:37:08 dime-5y29tl issue-alarm: zmq_msg_recv: Invalid argument
Mar 2 06:37:18 dime-5y29tl issue-alarm: message repeated 12 times: [ zmq_msg_recv: Invalid argument]
Mar 2 06:37:18 dime-5y29tl queue-manager[10648]: dropped request: 'issue-alarm queue-manager 9001.1'
Mar 2 06:37:21 dime-5y29tl queue-manager[10648]: dropped request: 'issue-alarm queue-manager 9002.1'
Mar 2 06:37:22 dime-5y29tl issue-alarm: zmq_msg_recv: Invalid argument

root@dime-5y29tl:/var/log/ralf# vim /var/log/monit.log
[UTC Mar 2 02:31:43] error : 'poll_etcd_cluster' '/usr/share/clearwater/bin/poll_etcd_cluster.sh' failed with exit status (1) -- 1
[UTC Mar 2 02:31:43] info : 'poll_etcd_cluster' exec: /bin/bash
[UTC Mar 2 02:31:53] info : 'poll_etcd_cluster' status succeeded [status=0] -- zmq_msg_recv: Resource temporarily unavailable

Any help would be much appreciated!

Thanks,
Linda

------------------------------

Subject: Digest Footer

_______________________________________________
Clearwater mailing list
Clearwater@lists.projectclearwater.org
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org

------------------------------

End of Clearwater Digest, Vol 59, Issue 15
******************************************